Oracle 10gR2 (10.2.0.5) RAC Installation on Linux, Part 2: Clusterware Installation and Upgrade

Environment: OEL 5.7 + Oracle 10.2.0.5 RAC

3. Install Clusterware

4. Upgrade Clusterware

Oracle 10gR2 RAC installation guide for Linux:

Part 1: Preparation

Part 2: Clusterware installation and upgrade (this article)

Part 3: DB installation and upgrade

3. Install Clusterware

3.1 Extract the Clusterware installation media

Change ownership of the directory holding the Oracle installation media to the oracle user:

[root@oradb27 media]# chown -R oracle:oinstall /u01/media/

As the oracle user, extract the installation media:

[oracle@oradb27 media]$ gunzip 10201_clusterware_linux_x86_64.cpio.gz
[oracle@oradb27 media]$ cpio -idmv < 10201_clusterware_linux_x86_64.cpio

Run the pre-installation check:

[root@oradb27 media]# /u01/media/clusterware/rootpre/rootpre.sh
No OraCM running

3.2 Start the Clusterware installation

Use Xmanager (XQuartz on macOS) to launch the Clusterware installer:
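For reference, the two usual ways of getting the installer GUI onto a local display in this kind of setup, as a minimal sketch (the node IP 192.168.1.27 is taken from section 4.2; everything else here is generic):

```
# Option A: let ssh forward X11 (the same approach used later for the upgrade in section 4.2)
ssh -X oracle@192.168.1.27

# Option B: point DISPLAY at the workstation's X server directly
# (requires the workstation to allow the connection, e.g. via xhost; <workstation-ip> is a placeholder)
export DISPLAY=<workstation-ip>:0.0
```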

[root@oradb27 media]# cd /u01/media/clusterware/install
[root@oradb27 install]# vi oraparam.ini

Modify the following section:

[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2

by appending redhat-5, so it becomes:

[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-5

Then launch the installer:

[root@oradb27 clusterware]# pwd
/u01/media/clusterware
[root@oradb27 clusterware]# ./runInstaller
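As an alternative to editing oraparam.ini, OUI's own prerequisite check can simply be skipped; this flag is standard OUI behavior rather than part of the original procedure:

```
./runInstaller -ignoreSysPrereqs
```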

3.3 Run the scripts as root when prompted

On node 1 (note: at this point the five LUNs /dev/sd{a,b,c,d,e} had not yet been partitioned):
[root@oradb27 rules.d]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 rules.d]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory

Setting up NS directories

Failed to upgrade Oracle Cluster Registry configuration

After creating a single partition (sd{a,b,c,d,e}1) on each of the five LUNs, the scripts ran successfully; a partitioning sketch and the successful rerun follow.
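For reference, a sketch of creating that single-partition layout non-interactively. The device names are taken from the post; this is destructive, so double-check them against your own storage, and make sure the raw-device bindings set up in Part 1 point at the new sd?1 partitions afterwards:

```
# Create one primary partition spanning each LUN (DESTRUCTIVE - verify device names first)
for d in sda sdb sdc sdd sde; do
  printf 'n\np\n1\n\n\nw\n' | fdisk /dev/$d
done
partprobe               # re-read the partition tables
ls /dev/sd[a-e]1        # sda1..sde1 should now exist
```

With the partitions in place and the raw devices bound to them, the scripts were rerun and completed successfully: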

[root@oradb27 10.2.0.5]# /u01/app/oracle/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oracle/oraInventory to 770.

Changing groupname of /u01/app/oracle/oraInventory to oinstall.

The execution of the script is complete

[root@oradb27 10.2.0.5]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh

WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

Checking to see if Oracle CRS stack is already configured

/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: oradb27 oradb27-priv oradb27

node 2: oradb28 oradb28-priv oradb28

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Now formatting voting device: /dev/raw/raw3

Now formatting voting device: /dev/raw/raw4

Now formatting voting device: /dev/raw/raw5

Format of 3 voting devices complete.

Startup will be queued to init within 90 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

oradb27

CSS is inactive on these nodes.

oradb28

Local node checking complete.

Run root.sh on remaining nodes to start CRS daemons.

[root@oradb27 10.2.0.5]#

Oracle's official solution for this error is documented in MOS: Executing root.sh errors with "Failed To Upgrade Oracle Cluster Registry Configuration" (Doc ID 466673.1):

> Before running the root.sh on the first node in the cluster do the following:
> 1. Download Patch:4679769 from Metalink (contains a patched version of clsfmt.bin).
> 2. Do the following steps as stated in the patch README to fix the problem:
> Note: clsfmt.bin need only be replaced on the 1st node of the cluster
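In this installation the actual cause turned out to be the unpartitioned LUNs, so the patch was not applied. For reference only, the general shape of the replacement described by the note is sketched below; the patch directory path is hypothetical and the real patch README should be followed instead:

```
# Sketch only - on node 1, after downloading and unzipping Patch 4679769:
cd /u01/app/oracle/product/10.2.0.5/crshome_1/bin
mv clsfmt.bin clsfmt.bin.orig                     # keep the shipped binary as a backup
cp /u01/media/4679769/clsfmt.bin .                # hypothetical path to the patched clsfmt.bin
chown --reference=clsfmt.bin.orig clsfmt.bin      # match the original ownership
chmod --reference=clsfmt.bin.orig clsfmt.bin      # match the original permissions
```

On node 2: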

[root@oradb28 crshome_1]# /u01/app/oracle/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oracle/oraInventory to 770.

Changing groupname of /u01/app/oracle/oraInventory to oinstall.

The execution of the script is complete

[root@oradb28 crshome_1]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh

WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

Checking to see if Oracle CRS stack is already configured

/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: oradb27 oradb27-priv oradb27

node 2: oradb28 oradb28-priv oradb28

clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.

-force is destructive and will destroy any previous cluster

configuration.

Oracle Cluster Registry for cluster has already been initialized

Startup will be queued to init within 90 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

oradb27

oradb28

CSS is active on all nodes.

Waiting for the Oracle CRSD and EVMD to start

Waiting for the Oracle CRSD and EVMD to start

Oracle CRS stack installed and running under init(1M)

Running vipca(silent) for configuring nodeapps

/u01/app/oracle/product/10.2.0.5/crshome_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

[root@oradb28 crshome_1]#


To fix the error above, edit the vipca and srvctl files under /u01/app/oracle/product/10.2.0.5/crshome_1/bin:

[root@oradb28 bin]# ls -l vipca

-rwxr-xr-x 1 oracle oinstall 5343 Jan 3 09:44 vipca

[root@oradb28 bin]# ls -l srvctl

-rwxr-xr-x 1 oracle oinstall 5828 Jan 3 09:44 srvctl

Add the following line to both files (a placement sketch follows):

unset LD_ASSUME_KERNEL
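For reference, a sketch of where the line typically goes; this follows the widely documented RHEL5/OEL5 workaround, and the surrounding lines in your copies of vipca and srvctl may differ slightly:

```
# vipca - after the architecture check that sets LD_ASSUME_KERNEL:
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
     LD_ASSUME_KERNEL=2.4.19
     export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL      # added: keep the bundled JRE off the obsolete LinuxThreads setting

# srvctl - directly after the equivalent export:
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL      # added
```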


Rerun /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh:

[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh

WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

Checking to see if Oracle CRS stack is already configured

Oracle CRS stack is already configured and will be running under init(1M)

No further errors were reported, but there was also no indication that vipca had run and created the node applications.

<h2 id="3.4">3.4 vipca创建(可能不需要)</h2>
如果上面3.3步骤正常执行成功了vipca,那么此步骤不再需要;
如果上面3.3步骤没有正常执行成功vipca,那么就需要手工在最后一个节点手工vipca创建:
这里手工执行vipca还遇到一个错误如下:

[root@oradb28 bin]# ./vipca

Error 0(Native: listNetInterfaces:[3])

[Error 0(Native: listNetInterfaces:[3])]

Check the network interface information and register it manually:

[root@oradb28 bin]# ./oifcfg getif

[root@oradb28 bin]# ./oifcfg iflist

eth0 192.168.1.0

eth1 10.10.10.0

[root@oradb28 bin]# ifconfig

eth0 Link encap:Ethernet HWaddr 06:CB:72:01:07:88

inet addr:192.168.1.28 Bcast:192.168.1.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:1018747 errors:0 dropped:0 overruns:0 frame:0

TX packets:542075 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:2196870487 (2.0 GiB) TX bytes:43268497 (41.2 MiB)

eth1 Link encap:Ethernet HWaddr 22:1A:5A:DE:C1:21

inet addr:10.10.10.28 Bcast:10.10.10.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:5343 errors:0 dropped:0 overruns:0 frame:0

TX packets:3656 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:1315035 (1.2 MiB) TX bytes:1219689 (1.1 MiB)

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

UP LOOPBACK RUNNING MTU:16436 Metric:1

RX packets:2193 errors:0 dropped:0 overruns:0 frame:0

TX packets:2193 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:65167 (63.6 KiB) TX bytes:65167 (63.6 KiB)

[root@oradb28 bin]# ./oifcfg -h

PRIF-9: incorrect usage

Name:

oifcfg - Oracle Interface Configuration Tool.

Usage: oifcfg iflist [-p [-n]]

oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...

oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]

oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]

oifcfg [-help]

    <nodename> - name of the host, as known to a communications network
<if_name> - name by which the interface is configured in the system
<subnet> - subnet address of the interface
<if_type> - type of the interface { cluster_interconnect | public | storage }

[root@oradb28 bin]# ./oifcfg setif -global eth0/192.168.1.0:public

[root@oradb28 bin]# ./oifcfg getif

eth0 192.168.1.0 global public

[root@oradb28 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect

[root@oradb28 bin]# ./oifcfg getif

eth0 192.168.1.0 global public

eth1 10.10.10.0 global cluster_interconnect

[root@oradb28 bin]#

Once oifcfg getif returned the interface information correctly, rerunning vipca succeeded.

Returning to the Clusterware installer window afterwards, the remaining steps also completed successfully.
At this point the cluster status should be normal on both nodes:

[oracle@oradb27 bin]$ crsctl check crs

CSS appears healthy

CRS appears healthy

EVM appears healthy

[oracle@oradb27 bin]$ crs_stat -t -v

Name Type R/RA F/FT Target State Host

ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27

ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27

ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27

ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28

ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28

ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28

[oracle@oradb27 bin]$

[oracle@oradb28 ~]$ crsctl check crs

CSS appears healthy

CRS appears healthy

EVM appears healthy

[oracle@oradb28 ~]$ crs_stat -t -v

Name Type R/RA F/FT Target State Host

ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27

ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27

ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27

ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28

ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28

ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28

[oracle@oradb28 ~]$
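Optionally, the OCR and voting disks can be checked as well. A sketch, run from the CRS home's bin directory (output varies by environment; ocrcheck shows full details only when run as root):

```
./ocrcheck                       # OCR integrity and location
./crsctl query css votedisk      # should list the three voting devices formatted by root.sh
```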


<h1 id="4">4.升级Clusterware</h1>
<h2 id="4.1">4.1 解压Patchset包</h2>

[root@oradb27 media]$ unzip p8202632_10205_Linux-x86-64.zip

[root@oradb27 media]$ cd Disk1/

[root@oradb27 Disk1]$ pwd

/u01/media/Disk1



<h2 id="4.2">4.2 开始升级clusterware</h2>
使用xquartz开始升级clusterware:
ssh -X oracle@192.168.1.27

[root@oradb27 Disk1]$ ./runInstaller

During the upgrade, one kernel parameter failed the pre-installation check:

Checking for rmem_default=1048576; found rmem_default=262144. Failed <<<<

Adjust /etc/sysctl.conf accordingly, then run sysctl -p to apply the change; a sketch follows.
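For example, since the check expects rmem_default to be at least 1048576, the change would look like this on both nodes (a sketch):

```
# /etc/sysctl.conf - raise the default socket receive buffer to the value the 10.2.0.5 check expects
net.core.rmem_default = 1048576
```

Then apply it without a reboot:

```
sysctl -p
```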

<h2 id="4.3">4.3 root用户按提示执行脚本</h2>

1. Log in as the root user.
2. As the root user, perform the following tasks:
   a. Shut down the CRS daemons by issuing the following command:
      /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
   b. Run the shell script located at:
      /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
      This script will automatically start the CRS daemons on the patched node upon completion.
3. After completing this procedure, proceed to the next node and repeat.
That is, run the following two commands on each node in turn:

/u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs

/u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh


On node 1:

[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs

Stopping resources.

Successfully stopped CRS resources

Stopping CSSD.

Shutting down CSS daemon.

Shutdown request successfully issued.

[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh

Creating pre-patch directory for saving pre-patch clusterware files

Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1

Relinking some shared libraries.

Relinking of patched files is complete.

WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

Preparing to recopy patched init and RC scripts.

Recopying init and RC scripts.

Startup will be queued to init within 30 seconds.

Starting up the CRS daemons.

Waiting for the patched CRS daemons to start.

This may take a while on some systems.

.

10205 patch successfully applied.

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Successfully deleted 1 values from OCR.

Successfully deleted 1 keys from OCR.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: oradb27 oradb27-priv oradb27

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

clscfg -upgrade completed successfully

Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration

Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs

[root@oradb27 bin]#


On node 2:

[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs

Stopping resources.

Successfully stopped CRS resources

Stopping CSSD.

Shutting down CSS daemon.

Shutdown request successfully issued.

[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh

Creating pre-patch directory for saving pre-patch clusterware files

Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1

Relinking some shared libraries.

Relinking of patched files is complete.

WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root

WARNING: directory '/u01/app/oracle/product' is not owned by root

WARNING: directory '/u01/app/oracle' is not owned by root

WARNING: directory '/u01/app' is not owned by root

Preparing to recopy patched init and RC scripts.

Recopying init and RC scripts.

Startup will be queued to init within 30 seconds.

Starting up the CRS daemons.

Waiting for the patched CRS daemons to start.

This may take a while on some systems.

.

10205 patch successfully applied.

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Successfully deleted 1 values from OCR.

Successfully deleted 1 keys from OCR.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 2: oradb28 oradb28-priv oradb28

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

clscfg -upgrade completed successfully

Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration

Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs

[root@oradb28 bin]#

The upgrade succeeded. Confirm that the CRS active version is 10.2.0.5 and that the cluster status is normal:

[oracle@oradb27 bin]$ crsctl query crs activeversion

CRS active version on the cluster is [10.2.0.5.0]

[oracle@oradb28 ~]$ crsctl query crs activeversion

CRS active version on the cluster is [10.2.0.5.0]

[oracle@oradb27 ~]$ crs_stat -t -v

Name Type R/RA F/FT Target State Host

ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27

ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27

ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27

ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28

ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28

ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28

[oracle@oradb27 ~]$

At this point, the Oracle Clusterware installation (10.2.0.1) and upgrade (to 10.2.0.5) are complete.
