Overview

The class1 week3 assignment was to implement a neural network with a single hidden layer; this assignment is to implement a deep, fully-connected neural network with L layers. The key points are essentially the same as in week3: just keep the dimensions of every parameter straight.

Key variables:

  • m: number of training examples
  • n[l]: number of nodes in layer l; the input is treated as layer 0
  • square-bracket superscript [l]: layer l
  • parenthesized superscript (i): the i-th example

$$
X =
\left[
\begin{matrix}
\vdots & \vdots & & \vdots \\
x^{(1)} & x^{(2)} & \cdots & x^{(m)} \\
\vdots & \vdots & & \vdots \\
\end{matrix}
\right]_{(n[0], m)}
$$

$$
W^{[l]} =
\left[
\begin{matrix}
\cdots & w^{[l] T}_1 & \cdots \\
\cdots & w^{[l] T}_2 & \cdots \\
 & \vdots & \\
\cdots & w^{[l] T}_{n[l]} & \cdots \\
\end{matrix}
\right]_{(n[l], n[l-1])}
$$

$$
b^{[l]} =
\left[
\begin{matrix}
b^{[l]}_1 \\
b^{[l]}_2 \\
\vdots \\
b^{[l]}_{n[l]} \\
\end{matrix}
\right]_{(n[l], 1)}
$$

$$
A^{[l]}=
\left[
\begin{matrix}
\vdots & \vdots & & \vdots \\
a^{[l](1)} & a^{[l](2)} & \cdots & a^{[l](m)} \\
\vdots & \vdots & & \vdots \\
\end{matrix}
\right]_{(n[l], m)}
$$

$$
Z^{[l]}=
\left[
\begin{matrix}
\vdots & \vdots & & \vdots \\
z^{[l](1)} & z^{[l](2)} & \cdots & z^{[l](m)} \\
\vdots & \vdots & & \vdots \\
\end{matrix}
\right]_{(n[l], m)}
$$
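
As a quick sanity check on these shapes, here is a minimal numpy sketch; the sizes n[0] = 3, n[1] = 4 and m = 5 are made up purely for illustration:

```python
import numpy as np

m, n0, n1 = 5, 3, 4            # made-up sizes: 5 examples, 3 input features, 4 units in layer 1
X = np.random.randn(n0, m)     # each column is one example x^(i)
W1 = np.random.randn(n1, n0)   # one row per unit of layer 1
b1 = np.zeros((n1, 1))         # broadcasts across the m columns

Z1 = W1 @ X + b1               # (n1, n0) @ (n0, m) + (n1, 1) -> (n1, m)
assert Z1.shape == (n1, m)
```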

***

Key formulas for a deep neural network:

  • Forward propagation

$$Z^{[l]}=W^{[l]}A^{[l-1]}+b^{[l]}$$
$$A^{[l]}=g^{[l]}(Z^{[l]})$$

For l < L, \(g^{[l]}\) is the ReLU function.

For l = L, \(g^{[L]}\) is the sigmoid function.

That is, the output layer uses sigmoid as its activation function, and every other layer uses ReLU, as in the sketch below.
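
A minimal sketch of one full forward pass following these two formulas; it inlines ReLU and sigmoid rather than calling the helper functions defined in the full code further down:

```python
import numpy as np

def forward_pass(X, parameters):
    # [LINEAR -> RELU] for layers 1..L-1, then LINEAR -> SIGMOID for layer L
    L = len(parameters) // 2
    A = X
    for l in range(1, L):
        Z = parameters['W' + str(l)] @ A + parameters['b' + str(l)]
        A = np.maximum(0, Z)               # g^[l] = relu for hidden layers
    ZL = parameters['W' + str(L)] @ A + parameters['b' + str(L)]
    AL = 1 / (1 + np.exp(-ZL))             # g^[L] = sigmoid for the output layer
    return AL
```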

  • Backward propagation

$$ dZ^{[l]} = dA^{[l]} * g'(Z^{[l]})$$
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]}$$

Initialize dAL, the gradient of the cost with respect to the final activation AL:

```python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
```
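
Written out in numpy, one backward step through layer l follows the four formulas directly. This is a sketch only; g_prime stands for the derivative of whichever activation layer l used:

```python
import numpy as np

def backward_step(dA, Z, A_prev, W, g_prime):
    dZ = dA * g_prime(Z)                               # dZ^[l] = dA^[l] * g'(Z^[l])
    m = A_prev.shape[1]
    dW = (1. / m) * (dZ @ A_prev.T)                    # same shape as W^[l]
    db = (1. / m) * np.sum(dZ, axis=1, keepdims=True)  # same shape as b^[l]
    dA_prev = W.T @ dZ                                 # handed back to layer l-1
    return dA_prev, dW, db
```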
  • Cost computation

$$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right))$$
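
In numpy this cost is essentially one line; a minimal sketch, assuming AL and Y are (1, m) arrays as defined above:

```python
import numpy as np

def cross_entropy_cost(AL, Y):
    m = Y.shape[1]
    return float(-(1. / m) * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL)))
```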

Code for the deep fully-connected neural network:

  • Key functions:
```python
# Initialize parameters; each layer's weights start as small random values.
# Input layer_dims is the number of nodes in each layer.
# Output parameters is a dict; parameters['W' + str(l)] and parameters['b' + str(l)]
# give layer l's initial parameters.
parameters = initialize_parameters_deep(layer_dims)

# Linear forward: computes the current layer's Z from Z = W*A_prev + b;
# linear_cache = (A_prev, W, b)
Z, linear_cache = linear_forward(A_prev, W, b)

# Linear-activation forward: computes A = g(Z) = g(W*A_prev + b);
# linear_activation_cache = (linear_cache, activation_cache) = ((A_prev, W, b), (Z))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation="sigmoid")

# Full L-layer forward pass; AL is the final output, caches collects every layer's cache
AL, caches = L_model_forward(X, parameters)

# Linear backward: derives dA_prev, dW, db from dZ, reusing the cached values
dA_prev, dW, db = linear_backward(dZ, linear_cache)

# Linear-activation backward: combines linear_backward with the activation's derivative
dA_prev, dW, db = linear_activation_backward(dA, linear_activation_cache, activation="sigmoid")

# Full L-layer backward pass; grads holds every layer's gradients as
# grads["dA" + str(l)], grads["dW" + str(l)], grads["db" + str(l)]
grads = L_model_backward(AL, Y, caches)

# Update the parameters with the given learning rate
parameters = update_parameters(parameters, grads, 0.1)

# Overall model: iterates num_iterations times, calling the forward and backward passes above
parameters = L_layer_model(train_x, train_y, layers_dims, learning_rate=0.0075, num_iterations=2500, print_cost=True)
```
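
For example, run against the initialize_parameters_deep defined in the full code below, a hypothetical layer_dims = [3, 4, 1] (3 inputs, one hidden layer of 4 units, 1 output unit) yields shapes that match the W and b definitions above:

```python
layer_dims = [3, 4, 1]  # hypothetical sizes: n[0] = 3, n[1] = 4, n[2] = 1
parameters = initialize_parameters_deep(layer_dims)
print(parameters['W1'].shape, parameters['b1'].shape)  # (4, 3) (4, 1)
print(parameters['W2'].shape, parameters['b2'].shape)  # (1, 4) (1, 1)
```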
  • Full code:
```python
import numpy as np
import matplotlib.pyplot as plt
import h5py


def sigmoid(Z):
    """
    Implements the sigmoid activation in numpy

    Arguments:
    Z -- numpy array of any shape

    Returns:
    A -- output of sigmoid(z), same shape as Z
    cache -- returns Z as well, useful during backpropagation
    """
    A = 1 / (1 + np.exp(-Z))
    cache = Z
    return A, cache


def relu(Z):
    """
    Implement the RELU function.

    Arguments:
    Z -- Output of the linear layer, of any shape

    Returns:
    A -- Post-activation parameter, of the same shape as Z
    cache -- returns Z as well; stored for computing the backward pass efficiently
    """
    A = np.maximum(0, Z)
    assert (A.shape == Z.shape)
    cache = Z
    return A, cache


def relu_backward(dA, cache):
    """
    Implement the backward propagation for a single RELU unit.

    Arguments:
    dA -- post-activation gradient, of any shape
    cache -- 'Z' where we store for computing backward propagation efficiently

    Returns:
    dZ -- Gradient of the cost with respect to Z
    """
    Z = cache
    dZ = np.array(dA, copy=True)  # just converting dz to a correct object
    dZ[Z <= 0] = 0                # when z <= 0, set dz to 0 as well
    assert (dZ.shape == Z.shape)
    return dZ


def sigmoid_backward(dA, cache):
    """
    Implement the backward propagation for a single SIGMOID unit.

    Arguments:
    dA -- post-activation gradient, of any shape
    cache -- 'Z' where we store for computing backward propagation efficiently

    Returns:
    dZ -- Gradient of the cost with respect to Z
    """
    Z = cache
    s = 1 / (1 + np.exp(-Z))
    dZ = dA * s * (1 - s)
    assert (dZ.shape == Z.shape)
    return dZ


def load_data():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels

    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels

    classes = np.array(test_dataset["list_classes"][:])  # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes


def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                  bl -- bias vector of shape (layer_dims[l], 1)
    """
    np.random.seed(1)
    parameters = {}
    L = len(layer_dims)  # number of layers in the network

    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) / np.sqrt(layer_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
        assert (parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
        assert (parameters['b' + str(l)].shape == (layer_dims[l], 1))

    return parameters


def linear_forward(A, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called pre-activation parameter
    cache -- a python tuple containing "A", "W" and "b"; stored for computing the backward pass efficiently
    """
    Z = W.dot(A) + b
    assert (Z.shape == (W.shape[0], A.shape[1]))
    cache = (A, W, b)
    return Z, cache


def linear_activation_forward(A_prev, W, b, activation):
    """
    Implement the forward propagation for the LINEAR->ACTIVATION layer

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    A -- the output of the activation function, also called the post-activation value
    cache -- a python tuple containing "linear_cache" and "activation_cache";
             stored for computing the backward pass efficiently
    """
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
    elif activation == "relu":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)

    assert (A.shape == (W.shape[0], A_prev.shape[1]))
    cache = (linear_cache, activation_cache)
    return A, cache


def L_model_forward(X, parameters):
    """
    Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation

    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- last post-activation value
    caches -- list of caches containing:
              every cache of linear_activation_forward() with "relu" (there are L-1 of them, indexed from 0 to L-2)
              the cache of linear_activation_forward() with "sigmoid" (there is one, indexed L-1)
    """
    caches = []
    A = X
    L = len(parameters) // 2  # number of layers in the neural network

    # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
    for l in range(1, L):
        A_prev = A
        A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation="relu")
        caches.append(cache)

    # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
    AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation="sigmoid")
    caches.append(cache)

    assert (AL.shape == (1, X.shape[1]))
    return AL, caches


def compute_cost(AL, Y):
    """
    Implement the cross-entropy cost function.

    Arguments:
    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
    Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)

    Returns:
    cost -- cross-entropy cost
    """
    m = Y.shape[1]

    # Compute loss from aL and y.
    cost = (1./m) * (-np.dot(Y, np.log(AL).T) - np.dot(1-Y, np.log(1-AL).T))
    cost = np.squeeze(cost)  # makes sure the cost's shape is what we expect (e.g. this turns [[17]] into 17)
    assert (cost.shape == ())
    return cost


def linear_backward(dZ, cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    A_prev, W, b = cache
    m = A_prev.shape[1]

    dW = 1./m * np.dot(dZ, A_prev.T)
    db = 1./m * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)

    assert (dA_prev.shape == A_prev.shape)
    assert (dW.shape == W.shape)
    assert (db.shape == b.shape)
    return dA_prev, dW, db


def linear_activation_backward(dA, cache, activation):
    """
    Implement the backward propagation for the LINEAR->ACTIVATION layer.

    Arguments:
    dA -- post-activation gradient for current layer l
    cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    linear_cache, activation_cache = cache

    if activation == "relu":
        dZ = relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    return dA_prev, dW, db


def L_model_backward(AL, Y, caches):
    """
    Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group

    Arguments:
    AL -- probability vector, output of the forward propagation (L_model_forward())
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
    caches -- list of caches containing:
              every cache of linear_activation_forward() with "relu" (there are L-1 of them, indexed from 0 to L-2)
              the cache of linear_activation_forward() with "sigmoid" (there is one, index L-1)

    Returns:
    grads -- A dictionary with the gradients
             grads["dA" + str(l)] = ...
             grads["dW" + str(l)] = ...
             grads["db" + str(l)] = ...
    """
    grads = {}
    L = len(caches)  # the number of layers
    m = AL.shape[1]
    Y = Y.reshape(AL.shape)  # after this line, Y is the same shape as AL

    # Initializing the backpropagation
    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Lth layer (SIGMOID -> LINEAR) gradients
    current_cache = caches[L-1]
    grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation="sigmoid")

    for l in reversed(range(L-1)):
        # lth layer: (RELU -> LINEAR) gradients
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache, activation="relu")
        grads["dA" + str(l + 1)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp

    return grads


def update_parameters(parameters, grads, learning_rate):
    """
    Update parameters using gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients, output of L_model_backward

    Returns:
    parameters -- python dictionary containing your updated parameters
                  parameters["W" + str(l)] = ...
                  parameters["b" + str(l)] = ...
    """
    L = len(parameters) // 2  # number of layers in the neural network

    # Update rule for each parameter. Use a for loop.
    for l in range(L):
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]

    return parameters


def L_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False):
    """
    Implements an L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1)
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    np.random.seed(1)
    costs = []  # keep track of cost

    # Parameters initialization
    parameters = initialize_parameters_deep(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID
        AL, caches = L_model_forward(X, parameters)

        # Compute cost
        cost = compute_cost(AL, Y)

        # Backward propagation
        grads = L_model_backward(AL, Y, caches)

        # Update parameters
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print and record the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters


def predict(X, y, parameters):
    """
    This function is used to predict the results of an L-layer neural network.

    Arguments:
    X -- data set of examples you would like to label
    y -- true labels for X
    parameters -- parameters of the trained model

    Returns:
    p -- predictions for the given dataset X
    """
    m = X.shape[1]
    n = len(parameters) // 2  # number of layers in the neural network
    p = np.zeros((1, m))

    # Forward propagation
    probas, caches = L_model_forward(X, parameters)

    # convert probas to 0/1 predictions
    for i in range(0, probas.shape[1]):
        if probas[0, i] > 0.5:
            p[0, i] = 1
        else:
            p[0, i] = 0

    print("Accuracy: " + str(np.sum((p == y) / m)))
    return p


train_x_orig, train_y, test_x_orig, test_y, classes = load_data()

# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T  # the "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T

# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten / 255.
test_x = test_x_flatten / 255.

layers_dims = [12288, 20, 7, 5, 1]

parameters = L_layer_model(train_x, train_y, layers_dims, learning_rate=0.0075, num_iterations=2500, print_cost=True)
predictions_train = predict(train_x, train_y, parameters)
pred_test = predict(test_x, test_y, parameters)
```
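
As a usage sketch, the trained parameters could also classify a single new image. Here my_image.jpg is a hypothetical file, and the 64x64 size is an assumption matching layers_dims[0] = 12288 = 64 * 64 * 3:

```python
from PIL import Image
import numpy as np

num_px = 64  # 64 * 64 * 3 = 12288 = layers_dims[0]
image = Image.open("my_image.jpg").convert("RGB").resize((num_px, num_px))  # hypothetical input file
x = np.array(image).reshape(num_px * num_px * 3, 1) / 255.                  # same preprocessing as the training set
my_label = np.array([[1]])  # assumed true label, needed only because predict() reports accuracy
p = predict(x, my_label, parameters)
```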
