Oracle 10gR2 (10.2.0.5) RAC Installation on Linux, Part 2: Clusterware Installation and Upgrade

Environment: OEL 5.7 + Oracle 10.2.0.5 RAC

3. Installing Clusterware

4. Upgrading Clusterware

Oracle 10gR2 RAC installation guide series for Linux:

Part 1: Preparation

Part 2: Clusterware installation and upgrade (this article)

Part 3: DB installation and upgrade

3. Installing Clusterware

3.1 Unpack the Clusterware installation media

Grant ownership of the directory holding the Oracle installation media to the oracle user:

```
[root@oradb27 media]# chown -R oracle:oinstall /u01/media/
```

As the oracle user, unpack the media:

```
[oracle@oradb27 media]$ gunzip 10201_clusterware_linux_x86_64.cpio.gz
[oracle@oradb27 media]$ cpio -idmv < 10201_clusterware_linux_x86_64.cpio
```

Run the pre-installation check:

```
[root@oradb27 media]# /u01/media/clusterware/rootpre/rootpre.sh
No OraCM running
```

3.2 Start the Clusterware installation

Start the Clusterware installer over X11 (e.g. Xmanager; on macOS, XQuartz):

```
[root@oradb27 media]# cd /u01/media/clusterware/install
[root@oradb27 install]# vi oraparam.ini
```

Find the following lines:

```
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2
```

Append redhat-5, so they read:

```
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-5
```

Then launch the installer:

```
[root@oradb27 clusterware]# pwd
/u01/media/clusterware
[root@oradb27 clusterware]# ./runInstaller
```
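The oraparam.ini edit can also be scripted. This is a hedged sketch run against a scratch copy; on a real install you would point it at /u01/media/clusterware/install/oraparam.ini instead of the /tmp path used here:

```shell
# Append redhat-5 to the Linux= line of the [Certified Versions] section.
set -e
INI=/tmp/oraparam.ini           # scratch copy for demonstration only
cat > "$INI" <<'EOF'
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2
EOF
# Only touch the Linux= line, and only if redhat-5 is not already listed.
grep -q '^Linux=.*redhat-5' "$INI" || sed -i 's/^Linux=.*/&,redhat-5/' "$INI"
grep '^Linux=' "$INI"
```

The idempotency guard matters if the script is re-run: without it, a second pass would append redhat-5 twice.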

3.3 Run the scripts as root when prompted

Run on node 1 (at this point the five LUNs /dev/sd{a,b,c,d,e} had not yet been partitioned):

```
[root@oradb27 rules.d]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 rules.d]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Failed to upgrade Oracle Cluster Registry configuration
```

The missing partitions caused the failure above. After creating one partition on each of the five LUNs (sd{a,b,c,d,e}1), both scripts were re-run and completed successfully.

Re-running the scripts on node 1 now succeeds:

```
[root@oradb27 10.2.0.5]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 10.2.0.5]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        oradb27
CSS is inactive on these nodes.
        oradb28
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@oradb27 10.2.0.5]#
```

Oracle's official fix for the earlier error is described in MOS note: Executing root.sh errors with "Failed To Upgrade Oracle Cluster Registry Configuration" (Doc ID 466673.1):

> Before running the root.sh on the first node in the cluster do the following:
> 1. Download Patch:4679769 from Metalink (contains a patched version of clsfmt.bin).
> 2. Do the following steps as stated in the patch README to fix the problem:
> Note: clsfmt.bin need only be replaced on the 1st node of the cluster

Run on node 2:

```
[root@oradb28 crshome_1]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb28 crshome_1]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        oradb27
        oradb28
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0.5/crshome_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
[root@oradb28 crshome_1]#
```


To fix the error above, edit the vipca and srvctl files under /u01/app/oracle/product/10.2.0.5/crshome_1/bin:

```
[root@oradb28 bin]# ls -l vipca
-rwxr-xr-x 1 oracle oinstall 5343 Jan 3 09:44 vipca
[root@oradb28 bin]# ls -l srvctl
-rwxr-xr-x 1 oracle oinstall 5828 Jan 3 09:44 srvctl
```

In both files, add the following line after the block that sets and exports LD_ASSUME_KERNEL:

```
unset LD_ASSUME_KERNEL
```
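A scripted version of the edit might look like the sketch below. It is demonstrated on a mock copy of vipca (the file contents here are simplified stand-ins, not the real script); on the real system the loop would cover $CRS_HOME/bin/vipca and $CRS_HOME/bin/srvctl:

```shell
# Insert "unset LD_ASSUME_KERNEL" right after any line that exports
# LD_ASSUME_KERNEL, so the variable never reaches the bundled JRE.
set -e
mkdir -p /tmp/crs_bin
cat > /tmp/crs_bin/vipca <<'EOF'
#!/bin/sh
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
exec java ...
EOF
for f in /tmp/crs_bin/vipca; do       # real list: vipca srvctl under $CRS_HOME/bin
  sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$f"
done
grep -n 'LD_ASSUME_KERNEL' /tmp/crs_bin/vipca
```

Placing the `unset` after the `export` means the variable is defined and immediately discarded, which neutralizes the setting without restructuring the script.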


Then re-run /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh:

```
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
```

No further errors were reported, but there was also no indication that vipca ran successfully.

<h2 id="3.4">3.4 Run vipca manually (may not be needed)</h2>
If step 3.3 already ran vipca successfully, this step is unnecessary.
If it did not, run vipca manually on the last node. Running it manually here hit another error:

```
[root@oradb28 bin]# ./vipca
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
```

Check the network interface information and register it manually:

```
[root@oradb28 bin]# ./oifcfg getif
[root@oradb28 bin]# ./oifcfg iflist
eth0  192.168.1.0
eth1  10.10.10.0
[root@oradb28 bin]# ifconfig
eth0      Link encap:Ethernet  HWaddr 06:CB:72:01:07:88
          inet addr:192.168.1.28  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1018747 errors:0 dropped:0 overruns:0 frame:0
          TX packets:542075 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2196870487 (2.0 GiB)  TX bytes:43268497 (41.2 MiB)
eth1      Link encap:Ethernet  HWaddr 22:1A:5A:DE:C1:21
          inet addr:10.10.10.28  Bcast:10.10.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5343 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3656 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1315035 (1.2 MiB)  TX bytes:1219689 (1.1 MiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2193 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2193 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:65167 (63.6 KiB)  TX bytes:65167 (63.6 KiB)
[root@oradb28 bin]# ./oifcfg -h
PRIF-9: incorrect usage
Name:
        oifcfg - Oracle Interface Configuration Tool.
Usage:  oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
        oifcfg [-help]
        <nodename> - name of the host, as known to a communications network
        <if_name>  - name by which the interface is configured in the system
        <subnet>   - subnet address of the interface
        <if_type>  - type of the interface { cluster_interconnect | public | storage }
[root@oradb28 bin]# ./oifcfg setif -global eth0/192.168.1.0:public
[root@oradb28 bin]# ./oifcfg getif
eth0  192.168.1.0  global  public
[root@oradb28 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@oradb28 bin]# ./oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
[root@oradb28 bin]#
```

Once oifcfg getif returned the interface information correctly, running vipca again succeeded.

Back in the Clusterware installer GUI, the installation then completed successfully as well.
At this point the cluster status should be healthy on both nodes:

```
[oracle@oradb27 bin]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb27 bin]$ crs_stat -t -v
Name           Type          R/RA  F/FT  Target  State   Host
ora....b27.gsd application   0/5   0/0   ONLINE  ONLINE  oradb27
ora....b27.ons application   0/3   0/0   ONLINE  ONLINE  oradb27
ora....b27.vip application   0/0   0/0   ONLINE  ONLINE  oradb27
ora....b28.gsd application   0/5   0/0   ONLINE  ONLINE  oradb28
ora....b28.ons application   0/3   0/0   ONLINE  ONLINE  oradb28
ora....b28.vip application   0/0   0/0   ONLINE  ONLINE  oradb28
[oracle@oradb27 bin]$

[oracle@oradb28 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb28 ~]$ crs_stat -t -v
Name           Type          R/RA  F/FT  Target  State   Host
ora....b27.gsd application   0/5   0/0   ONLINE  ONLINE  oradb27
ora....b27.ons application   0/3   0/0   ONLINE  ONLINE  oradb27
ora....b27.vip application   0/0   0/0   ONLINE  ONLINE  oradb27
ora....b28.gsd application   0/5   0/0   ONLINE  ONLINE  oradb28
ora....b28.ons application   0/3   0/0   ONLINE  ONLINE  oradb28
ora....b28.vip application   0/0   0/0   ONLINE  ONLINE  oradb28
[oracle@oradb28 ~]$
```


<h1 id="4">4. Upgrading Clusterware</h1>
<h2 id="4.1">4.1 Unpack the patchset</h2>

```
[root@oradb27 media]$ unzip p8202632_10205_Linux-x86-64.zip
[root@oradb27 media]$ cd Disk1/
[root@oradb27 Disk1]$ pwd
/u01/media/Disk1
```



<h2 id="4.2">4.2 Start the Clusterware upgrade</h2>
Start the upgrade over X11 (XQuartz here):

```
ssh -X oracle@192.168.1.27
[root@oradb27 Disk1]$ ./runInstaller
```

During the prerequisite checks of the upgrade, one kernel parameter failed validation:

```
Checking for rmem_default=1048576; found rmem_default=262144. Failed <<<<
```

Adjust /etc/sysctl.conf accordingly, then run sysctl -p to apply the change.
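The adjustment can be scripted; the sketch below works on a scratch copy of the file (on the real system you would edit /etc/sysctl.conf as root and then run `sysctl -p`):

```shell
# Raise net.core.rmem_default to the value the 10.2.0.5 prerequisite
# check expects (1048576). Demonstrated on a scratch copy of sysctl.conf.
set -e
CONF=/tmp/sysctl.conf
echo 'net.core.rmem_default = 262144' > "$CONF"   # old value, as found by the check
# Replace the existing setting if present, otherwise append it.
if grep -q '^net.core.rmem_default' "$CONF"; then
  sed -i 's/^net.core.rmem_default.*/net.core.rmem_default = 1048576/' "$CONF"
else
  echo 'net.core.rmem_default = 1048576' >> "$CONF"
fi
grep rmem_default "$CONF"
# On the real host, apply with: sysctl -p   (requires root)
```

The replace-or-append pattern keeps the file free of duplicate entries, which matters because later duplicates would silently win when sysctl -p re-reads the file.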

<h2 id="4.3">4.3 Run the scripts as root when prompted</h2>

The installer prompts with the following instructions from the patch README:

> 1. Log in as the root user.
> 2. As the root user, perform the following tasks:
>    a. Shutdown the CRS daemons by issuing the following command:
>       /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
>    b. Run the shell script located at:
>       /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
>       This script will automatically start the CRS daemons on the patched node upon completion.
> 3. After completing this procedure, proceed to the next node and repeat.

That is, run the following two commands on each node in turn:

```
/u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
/u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
```


Run on node 1:

```
[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs
[root@oradb27 bin]#
```


Run on node 2:

```
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs
[root@oradb28 bin]#
```

The upgrade succeeded. Confirm that the active CRS version is 10.2.0.5 and that the cluster status is normal:

```
[oracle@oradb27 bin]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
[oracle@oradb28 ~]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
[oracle@oradb27 ~]$ crs_stat -t -v
Name           Type          R/RA  F/FT  Target  State   Host
ora....b27.gsd application   0/5   0/0   ONLINE  ONLINE  oradb27
ora....b27.ons application   0/3   0/0   ONLINE  ONLINE  oradb27
ora....b27.vip application   0/0   0/0   ONLINE  ONLINE  oradb27
ora....b28.gsd application   0/5   0/0   ONLINE  ONLINE  oradb28
ora....b28.ons application   0/3   0/0   ONLINE  ONLINE  oradb28
ora....b28.vip application   0/0   0/0   ONLINE  ONLINE  oradb28
[oracle@oradb27 ~]$
```

At this point, the Oracle Clusterware installation (10.2.0.1) and upgrade (10.2.0.5) are complete.
