This post works through the fully-connected network part of Assignment 3 and summarizes what we have covered so far.

In the earlier assignments the way we built neural networks was fairly simple and not modular. A3 guides us to implement the pieces covered earlier in a modular way: the linear layer, the ReLU layer, the loss layers, and the dropout layer (not covered in the earlier lectures, but it appears in cs231n), as well as the different gradient descent update rules (SGD, SGD+Momentum, RMSprop, Adam).

Single-layer Linear and ReLU implementation

class Linear(object):

    @staticmethod
    def forward(x, w, b):
        """
        Computes the forward pass for a linear (fully-connected) layer.
        The input x has shape (N, d_1, ..., d_k) and contains a minibatch of N
        examples, where each example x[i] has shape (d_1, ..., d_k). We will
        reshape each input into a vector of dimension D = d_1 * ... * d_k, and
        then transform it to an output vector of dimension M.
        Inputs:
        - x: A tensor containing input data, of shape (N, d_1, ..., d_k)
        - w: A tensor of weights, of shape (D, M)
        - b: A tensor of biases, of shape (M,)
        Returns a tuple of:
        - out: Output, of shape (N, M)
        - cache: (x, w, b)
        """
        # Flatten each example into a row vector of length D, then apply the affine map.
        out = x.view(x.shape[0], -1).mm(w) + b
        cache = (x, w, b)
        return out, cache

    @staticmethod
    def backward(dout, cache):
        """
        Computes the backward pass for a linear layer.
        Inputs:
        - dout: Upstream derivative, of shape (N, M)
        - cache: Tuple of:
          - x: Input data, of shape (N, d_1, ..., d_k)
          - w: Weights, of shape (D, M)
          - b: Biases, of shape (M,)
        Returns a tuple of:
        - dx: Gradient with respect to x, of shape (N, d_1, ..., d_k)
        - dw: Gradient with respect to w, of shape (D, M)
        - db: Gradient with respect to b, of shape (M,)
        """
        x, w, b = cache
        db = dout.sum(dim=0)                        # bias gets the column sums of dout
        dx = dout.mm(w.t()).view(x.shape)           # propagate through w, restore x's original shape
        dw = x.view(x.shape[0], -1).t().mm(dout)    # (D, N) @ (N, M) -> (D, M)
        return dx, dw, db


class ReLU(object):

    @staticmethod
    def forward(x):
        """
        Computes the forward pass for a layer of rectified linear units (ReLUs).
        Input:
        - x: Input; a tensor of any shape
        Returns a tuple of:
        - out: Output, a tensor of the same shape as x
        - cache: x
        """
        out = x.clone()
        out[out < 0] = 0        # zero out the negative activations
        cache = x
        return out, cache

    @staticmethod
    def backward(dout, cache):
        """
        Computes the backward pass for a layer of rectified linear units (ReLUs).
        Input:
        - dout: Upstream derivatives, of any shape
        - cache: Input x, of same shape as dout
        Returns:
        - dx: Gradient with respect to x
        """
        x = cache
        dx = dout.clone()
        dx[x < 0] = 0           # gradient is blocked wherever the input was negative
        return dx


class Linear_ReLU(object):

    @staticmethod
    def forward(x, w, b):
        """
        Convenience layer that performs a linear transform followed by a ReLU.
        Inputs:
        - x: Input to the linear layer
        - w, b: Weights for the linear layer
        Returns a tuple of:
        - out: Output from the ReLU
        - cache: Object to give to the backward pass
        """
        a, fc_cache = Linear.forward(x, w, b)
        out, relu_cache = ReLU.forward(a)
        cache = (fc_cache, relu_cache)
        return out, cache

    @staticmethod
    def backward(dout, cache):
        """
        Backward pass for the linear-relu convenience layer.
        """
        fc_cache, relu_cache = cache
        da = ReLU.backward(dout, relu_cache)
        dx, dw, db = Linear.backward(da, fc_cache)
        return dx, dw, db

As the code above shows, the forward and backward passes of the linear and ReLU layers can be implemented separately; the derivation is discussed in my previous post: https://www.cnblogs.com/dyccyber/p/17764347.html

The one difference is that x must first be reshaped into an N x D matrix before it can be matrix-multiplied with the weight matrix.

Having implemented linear and ReLU separately, and since a network almost always follows a linear layer immediately with a ReLU, we can also build a Linear_ReLU class that fuses the forward and backward passes of the two layers; a small usage sketch follows.
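Below is a minimal usage sketch (my own addition, not part of the assignment code) that runs one forward/backward pass through Linear_ReLU on random tensors and checks that the gradient shapes match the inputs; it assumes the classes above are in scope.

import torch

N, d1, d2, M = 4, 5, 6, 3                           # batch size, per-example dims, output dim
x = torch.randn(N, d1, d2, dtype=torch.float64)
w = torch.randn(d1 * d2, M, dtype=torch.float64)    # D = d1 * d2
b = torch.randn(M, dtype=torch.float64)

out, cache = Linear_ReLU.forward(x, w, b)           # out has shape (N, M)
dout = torch.randn_like(out)                        # stand-in for an upstream gradient
dx, dw, db = Linear_ReLU.backward(dout, cache)

assert dx.shape == x.shape and dw.shape == w.shape and db.shape == b.shape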

Loss layer implementation

def svm_loss(x, y):
    """
    Computes the loss and gradient for multiclass SVM classification.
    Inputs:
    - x: Input data, of shape (N, C) where x[i, j] is the score for the jth
      class for the ith input.
    - y: Vector of labels, of shape (N,) where y[i] is the label for x[i] and
      0 <= y[i] < C
    Returns a tuple of:
    - loss: Scalar giving the loss
    - dx: Gradient of the loss with respect to x
    """
    N = x.shape[0]
    correct_class_scores = x[torch.arange(N), y]
    # Hinge margins; the margin of the correct class is zeroed out.
    margins = (x - correct_class_scores[:, None] + 1.0).clamp(min=0.)
    margins[torch.arange(N), y] = 0.
    loss = margins.sum() / N
    # Each positive margin contributes +1 to its own column and -1 to the correct class.
    num_pos = (margins > 0).sum(dim=1)
    dx = torch.zeros_like(x)
    dx[margins > 0] = 1.
    dx[torch.arange(N), y] -= num_pos.to(dx.dtype)
    dx /= N
    return loss, dx


def softmax_loss(x, y):
    """
    Computes the loss and gradient for softmax classification.
    Inputs:
    - x: Input data, of shape (N, C) where x[i, j] is the score for the jth
      class for the ith input.
    - y: Vector of labels, of shape (N,) where y[i] is the label for x[i] and
      0 <= y[i] < C
    Returns a tuple of:
    - loss: Scalar giving the loss
    - dx: Gradient of the loss with respect to x
    """
    # Shift the logits for numerical stability before exponentiating.
    shifted_logits = x - x.max(dim=1, keepdim=True).values
    Z = shifted_logits.exp().sum(dim=1, keepdim=True)
    log_probs = shifted_logits - Z.log()
    probs = log_probs.exp()
    N = x.shape[0]
    loss = (-1.0 / N) * log_probs[torch.arange(N), y].sum()
    # Gradient is softmax(x) with 1 subtracted at the correct class, averaged over N.
    dx = probs.clone()
    dx[torch.arange(N), y] -= 1
    dx /= N
    return loss, dx

We have implemented these loss layers before; the derivations require some matrix calculus, for which these two posts are good references:

http://giantpandacv.com/academic/算法科普/深度学习基础/SVM Loss以及梯度推导/

https://blog.csdn.net/qq_27261889/article/details/82915598
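As a quick sanity check (my own sketch, not from the assignment), the analytic gradient returned by softmax_loss can be compared against what torch.autograd computes through F.cross_entropy, which implements the same mean-reduced cross-entropy loss:

import torch
import torch.nn.functional as F

N, C = 8, 10
x = torch.randn(N, C, dtype=torch.float64, requires_grad=True)
y = torch.randint(0, C, (N,))

loss, dx = softmax_loss(x.detach(), y)    # analytic loss and gradient from the code above
ref_loss = F.cross_entropy(x, y)          # reference, reduction='mean' by default
ref_loss.backward()

print(torch.allclose(loss, ref_loss.detach()))   # should be True
print(torch.allclose(dx, x.grad))                # should be True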

Multi-layer neural network

For the multi-layer network, we start with the class initialization. The architecture is {linear - relu - [dropout]} x (L - 1) - linear - softmax: L - 1 blocks of linear + ReLU (+ optional dropout), ending with a final linear layer whose scores feed a softmax loss. In __init__ we loop over the hidden layers, initializing each weight matrix and bias vector, and then initialize the final linear layer, taking care that the matrix dimensions chain correctly from one layer to the next.

class FullyConnectedNet(object):
    """
    A fully-connected neural network with an arbitrary number of hidden layers,
    ReLU nonlinearities, and a softmax loss function.
    For a network with L layers, the architecture will be:

        {linear - relu - [dropout]} x (L - 1) - linear - softmax

    where dropout is optional, and the {...} block is repeated L - 1 times.
    Similar to the TwoLayerNet above, learnable parameters are stored in the
    self.params dictionary and will be learned using the Solver class.
    """

    def __init__(self, hidden_dims, input_dim=3*32*32, num_classes=10,
                 dropout=0.0, reg=0.0, weight_scale=1e-2, seed=None,
                 dtype=torch.float, device='cpu'):
        """
        Initialize a new FullyConnectedNet.
        Inputs:
        - hidden_dims: A list of integers giving the size of each hidden layer.
        - input_dim: An integer giving the size of the input.
        - num_classes: An integer giving the number of classes to classify.
        - dropout: Scalar between 0 and 1 giving the drop probability for networks
          with dropout. If dropout=0 then the network should not use dropout.
        - reg: Scalar giving L2 regularization strength.
        - weight_scale: Scalar giving the standard deviation for random
          initialization of the weights.
        - seed: If not None, then pass this random seed to the dropout layers.
          This will make the dropout layers deterministic so we can gradient
          check the model.
        - dtype: A torch data type object; all computations will be performed
          using this datatype. float is faster but less accurate, so you should
          use double for numeric gradient checking.
        - device: device to use for computation. 'cpu' or 'cuda'
        """
        self.use_dropout = dropout != 0
        self.reg = reg
        self.num_layers = 1 + len(hidden_dims)
        self.dtype = dtype
        self.params = {}

        ############################################################################
        # TODO: Initialize the parameters of the network, storing all values in    #
        # the self.params dictionary. Store weights and biases for the first layer #
        # in W1 and b1; for the second layer use W2 and b2, etc. Weights should be #
        # initialized from a normal distribution centered at 0 with standard       #
        # deviation equal to weight_scale. Biases should be initialized to zero.   #
        ############################################################################
        # Hidden layers: W{i} has shape (previous dim, hidden_dim), b{i} has shape (hidden_dim,).
        last_dim = input_dim
        for n, hidden_dim in enumerate(hidden_dims):
            i = n + 1
            self.params['W{}'.format(i)] = weight_scale * torch.randn(last_dim, hidden_dim, dtype=dtype, device=device)
            self.params['b{}'.format(i)] = torch.zeros(hidden_dim, dtype=dtype, device=device)
            last_dim = hidden_dim
        # Final linear layer maps the last hidden dimension to the class scores.
        i += 1
        self.params['W{}'.format(i)] = weight_scale * torch.randn(last_dim, num_classes, dtype=dtype, device=device)
        self.params['b{}'.format(i)] = torch.zeros(num_classes, dtype=dtype, device=device)

        # When using dropout we need to pass a dropout_param dictionary to each
        # dropout layer so that the layer knows the dropout probability and the mode
        # (train / test). You can pass the same dropout_param to each dropout layer.
        self.dropout_param = {}
        if self.use_dropout:
            self.dropout_param = {'mode': 'train', 'p': dropout}
            if seed is not None:
                self.dropout_param['seed'] = seed
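A short usage sketch of the constructor (my own, with arbitrary hyperparameter values): build a net with three hidden layers and print the parameter shapes to confirm the dimensions chain from input_dim through hidden_dims to num_classes.

model = FullyConnectedNet(hidden_dims=[100, 50, 25],
                          input_dim=3 * 32 * 32,
                          num_classes=10,
                          dropout=0.5,
                          reg=1e-3,
                          dtype=torch.float64,
                          device='cpu')

for name in sorted(model.params):
    print(name, tuple(model.params[name].shape))
# W1: (3072, 100), W2: (100, 50), W3: (50, 25), W4: (25, 10), with matching bias vectors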

Next, we define save and load methods to store and restore the model parameters and hyperparameters:

def save(self, path):
    checkpoint = {
        'reg': self.reg,
        'dtype': self.dtype,
        'params': self.params,
        'num_layers': self.num_layers,
        'use_dropout': self.use_dropout,
        'dropout_param': self.dropout_param,
    }
    torch.save(checkpoint, path)
    print("Saved in {}".format(path))

def load(self, path, dtype, device):
    checkpoint = torch.load(path, map_location='cpu')
    self.params = checkpoint['params']
    self.dtype = dtype
    self.reg = checkpoint['reg']
    self.num_layers = checkpoint['num_layers']
    self.use_dropout = checkpoint['use_dropout']
    self.dropout_param = checkpoint['dropout_param']
    # Move every parameter to the requested dtype / device.
    for p in self.params:
        self.params[p] = self.params[p].type(dtype).to(device)
    print("load checkpoint file: {}".format(path))
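Usage is straightforward; a brief sketch continuing from the model above (the file name is arbitrary):

model.save('fc_net.pth')    # serializes params and hyperparameters with torch.save

model2 = FullyConnectedNet(hidden_dims=[100, 50, 25], dtype=torch.float64)
model2.load('fc_net.pth', dtype=torch.float64, device='cpu')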

Finally come the forward and backward passes, which simply reuse the basic linear and ReLU layers implemented above; the only thing to watch is the network structure, so that the layers are applied (and differentiated) in the right order.

def loss(self, X, y=None):
    """
    Compute loss and gradient for the fully-connected net.
    Input / output: Same as TwoLayerNet above.
    """
    X = X.to(self.dtype)
    mode = 'test' if y is None else 'train'

    # Set train/test mode for the dropout param since dropout behaves
    # differently during training and testing.
    if self.use_dropout:
        self.dropout_param['mode'] = mode
    scores = None
    ############################################################################
    # TODO: Implement the forward pass for the fully-connected net, computing  #
    # the class scores for X and storing them in the scores variable.          #
    #                                                                          #
    # When using dropout, you'll need to pass self.dropout_param to each       #
    # dropout forward pass.                                                    #
    ############################################################################
    cache_dict = {}
    last_out = X
    # Hidden layers: Linear_ReLU, optionally followed by Dropout.
    for n in range(self.num_layers - 1):
        i = n + 1
        last_out, cache_dict['cache_LR{}'.format(i)] = Linear_ReLU.forward(
            last_out, self.params['W{}'.format(i)], self.params['b{}'.format(i)])
        if self.use_dropout:
            last_out, cache_dict['cache_Dropout{}'.format(i)] = Dropout.forward(
                last_out, self.dropout_param)
    # Final linear layer produces the class scores.
    i += 1
    last_out, cache_dict['cache_L{}'.format(i)] = Linear.forward(
        last_out, self.params['W{}'.format(i)], self.params['b{}'.format(i)])
    scores = last_out

    # If test mode return early
    if mode == 'test':
        return scores

    loss, grads = 0.0, {}
    ############################################################################
    # TODO: Implement the backward pass for the fully-connected net. Store the #
    # loss in the loss variable and gradients in the grads dictionary. Compute #
    # data loss using softmax, and make sure that grads[k] holds the gradients #
    # for self.params[k]. Don't forget to add L2 regularization!               #
    ############################################################################
    loss, dout = softmax_loss(scores, y)
    # Last linear layer: data gradient plus L2 regularization.
    loss += (self.params['W{}'.format(i)] * self.params['W{}'.format(i)]).sum() * self.reg
    last_dout, dw, db = Linear.backward(dout, cache_dict['cache_L{}'.format(i)])
    grads['W{}'.format(i)] = dw + 2 * self.params['W{}'.format(i)] * self.reg
    grads['b{}'.format(i)] = db
    # Walk back through the hidden layers in reverse order.
    for n in range(self.num_layers - 1)[::-1]:
        i = n + 1
        if self.use_dropout:
            last_dout = Dropout.backward(last_dout, cache_dict['cache_Dropout{}'.format(i)])
        last_dout, dw, db = Linear_ReLU.backward(last_dout, cache_dict['cache_LR{}'.format(i)])
        grads['W{}'.format(i)] = dw + 2 * self.params['W{}'.format(i)] * self.reg
        grads['b{}'.format(i)] = db
        loss += (self.params['W{}'.format(i)] * self.params['W{}'.format(i)]).sum() * self.reg
    return loss, grads
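As a rough spot check (my own sketch, with arbitrary sizes, dropout disabled so the forward pass is deterministic), the gradient returned for a single weight entry can be compared against a centered finite difference of the loss:

torch.manual_seed(0)
X = torch.randn(5, 3 * 32 * 32, dtype=torch.float64)
y = torch.randint(0, 10, (5,))
model = FullyConnectedNet(hidden_dims=[20, 30], reg=0.1, dtype=torch.float64)

loss, grads = model.loss(X, y)

h = 1e-6
w = model.params['W1']
w[0, 0] += h
loss_plus, _ = model.loss(X, y)
w[0, 0] -= 2 * h
loss_minus, _ = model.loss(X, y)
w[0, 0] += h                      # restore the original value

numeric = (loss_plus - loss_minus) / (2 * h)
print(numeric.item(), grads['W1'][0, 0].item())   # the two numbers should agree closely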

Different gradient descent methods

Implementations of SGD, SGD+Momentum, RMSprop, and Adam (momentum + RMSprop + bias correction).

The underlying ideas are described in an earlier post: https://www.cnblogs.com/dyccyber/p/17759697.html

One point worth calling out: Adam adds bias correction because m and v start at zero, so in the first few iterations they are biased toward zero; the underestimated second moment makes the denominator sqrt(v) too small, and without the correction the earliest update steps would be too large.
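A rough numeric illustration of that point (my own, not part of the assignment): on the very first step the uncorrected moments m = (1 - beta1) * g and v = (1 - beta2) * g^2 are both shrunk toward zero, and the raw ratio m / sqrt(v) comes out roughly three times larger than the bias-corrected one.

import torch

g = torch.tensor(1.0)                  # pretend gradient
beta1, beta2, eps = 0.9, 0.999, 1e-8

m = (1 - beta1) * g                    # first-step moments, starting from zero
v = (1 - beta2) * g ** 2

raw_step = m / (v.sqrt() + eps)        # ~3.16: too large
m_hat = m / (1 - beta1 ** 1)           # bias correction at t = 1
v_hat = v / (1 - beta2 ** 1)
corrected_step = m_hat / (v_hat.sqrt() + eps)   # ~1.0
print(raw_step.item(), corrected_step.item())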

def sgd(w, dw, config=None):
    """
    Performs vanilla stochastic gradient descent.
    config format:
    - learning_rate: Scalar learning rate.
    """
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)

    w -= config['learning_rate'] * dw
    return w, config


def sgd_momentum(w, dw, config=None):
    """
    Performs stochastic gradient descent with momentum.
    config format:
    - learning_rate: Scalar learning rate.
    - momentum: Scalar between 0 and 1 giving the momentum value.
      Setting momentum = 0 reduces to sgd.
    - velocity: A tensor of the same shape as w and dw used to store a
      moving average of the gradients.
    """
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', torch.zeros_like(w))

    # Update the velocity with the gradient, then step in the velocity direction.
    v = config['momentum'] * v - config['learning_rate'] * dw
    next_w = w + v

    config['velocity'] = v
    return next_w, config


def rmsprop(w, dw, config=None):
    """
    Uses the RMSProp update rule, which uses a moving average of squared
    gradient values to set adaptive per-parameter learning rates.
    config format:
    - learning_rate: Scalar learning rate.
    - decay_rate: Scalar between 0 and 1 giving the decay rate for the squared
      gradient cache.
    - epsilon: Small scalar used for smoothing to avoid dividing by zero.
    - cache: Moving average of second moments of gradients.
    """
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', torch.zeros_like(w))

    # Keep a decaying average of squared gradients and scale the step by its square root.
    config['cache'] = config['decay_rate'] * config['cache'] + (1 - config['decay_rate']) * dw ** 2
    w -= config['learning_rate'] * dw / (torch.sqrt(config['cache']) + config['epsilon'])
    next_w = w

    return next_w, config


def adam(w, dw, config=None):
    """
    Uses the Adam update rule, which incorporates moving averages of both the
    gradient and its square and a bias correction term.
    config format:
    - learning_rate: Scalar learning rate.
    - beta1: Decay rate for moving average of first moment of gradient.
    - beta2: Decay rate for moving average of second moment of gradient.
    - epsilon: Small scalar used for smoothing to avoid dividing by zero.
    - m: Moving average of gradient.
    - v: Moving average of squared gradient.
    - t: Iteration number.
    """
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-3)
    config.setdefault('beta1', 0.9)
    config.setdefault('beta2', 0.999)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('m', torch.zeros_like(w))
    config.setdefault('v', torch.zeros_like(w))
    config.setdefault('t', 0)

    # NOTE: t is incremented before it is used in the bias-correction terms.
    config['t'] += 1
    config['m'] = config['beta1'] * config['m'] + (1 - config['beta1']) * dw
    mt = config['m'] / (1 - config['beta1'] ** config['t'])        # bias-corrected first moment
    config['v'] = config['beta2'] * config['v'] + (1 - config['beta2']) * (dw * dw)
    vc = config['v'] / (1 - config['beta2'] ** config['t'])        # bias-corrected second moment
    next_w = w - (config['learning_rate'] * mt) / (torch.sqrt(vc) + config['epsilon'])

    return next_w, config
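For context, here is a hedged sketch of how a Solver-style training loop would call these update rules: every parameter keeps its own config dict, which carries the optimizer state (velocity, cache, or m/v/t) from one iteration to the next. The names num_iterations, X_batch and y_batch are placeholders for your own loop and data pipeline.

update_rule = adam                      # or sgd, sgd_momentum, rmsprop
optim_configs = {p: {'learning_rate': 1e-3} for p in model.params}

for _ in range(num_iterations):
    loss, grads = model.loss(X_batch, y_batch)      # one minibatch
    for p, w in model.params.items():
        next_w, next_config = update_rule(w, grads[p], optim_configs[p])
        model.params[p] = next_w
        optim_configs[p] = next_config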

Dropout layer

Note that in the fully-connected network above, dropout is only applied during training; at test time it is skipped entirely.

Dropout is a very simple and effective regularization technique. Concretely, during training each neuron output is dropped with probability p: we draw a uniform random number for every neuron and zero its output when that number falls below p; with inverted dropout the surviving outputs are additionally scaled by 1/(1-p) so that the expected activation stays the same.

Seen from another angle, dropout effectively trains an ensemble of sub-networks sampled from the full network, which discourages complex co-adaptations between neurons.

The original paper: https://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf

Implementation:

class Dropout(object):

    @staticmethod
    def forward(x, dropout_param):
        """
        Performs the forward pass for (inverted) dropout.
        Inputs:
        - x: Input data: tensor of any shape
        - dropout_param: A dictionary with the following keys:
          - p: Dropout parameter. We *drop* each neuron output with probability p.
          - mode: 'test' or 'train'. If the mode is train, then perform dropout;
            if the mode is test, then just return the input.
          - seed: Seed for the random number generator. Passing seed makes this
            function deterministic, which is needed for gradient checking but not
            in real networks.
        Outputs:
        - out: Tensor of the same shape as x.
        - cache: tuple (dropout_param, mask). In training mode, mask is the dropout
          mask that was used to multiply the input; in test mode, mask is None.
        NOTE: Please implement **inverted** dropout, not the vanilla version of dropout.
        See http://cs231n.github.io/neural-networks-2/#reg for more details.
        NOTE 2: Keep in mind that p is the probability of **dropping** a neuron
        output; this might be contrary to some sources, where it is referred to
        as the probability of keeping a neuron output.
        """
        p, mode = dropout_param['p'], dropout_param['mode']
        if 'seed' in dropout_param:
            torch.manual_seed(dropout_param['seed'])

        mask = None
        out = None
        if mode == 'train':
            # Inverted dropout: keep each neuron with probability 1 - p and scale
            # the kept activations by 1 / (1 - p) so the expected value is unchanged;
            # nothing then needs to be rescaled at test time.
            mask = (torch.rand_like(x) > p).to(x.dtype) / (1 - p)
            out = x * mask
        elif mode == 'test':
            out = x

        cache = (dropout_param, mask)
        return out, cache

    @staticmethod
    def backward(dout, cache):
        """
        Perform the backward pass for (inverted) dropout.
        Inputs:
        - dout: Upstream derivatives, of any shape
        - cache: (dropout_param, mask) from Dropout.forward.
        """
        dropout_param, mask = cache
        mode = dropout_param['mode']

        dx = None
        if mode == 'train':
            # Apply the same (scaled) mask that was used in the forward pass.
            dx = dout * mask
        elif mode == 'test':
            dx = dout
        return dx
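A small check (my own sketch) that the inverted scaling does what the comment claims: the train-time output has roughly the same mean as the test-time output, which is exactly why nothing needs to be rescaled at test time.

torch.manual_seed(0)
x = torch.ones(10000)
param = {'mode': 'train', 'p': 0.3}

out_train, _ = Dropout.forward(x, param)
param['mode'] = 'test'
out_test, _ = Dropout.forward(x, param)

print(out_train.mean().item())   # close to 1.0 thanks to the 1 / (1 - p) scaling
print(out_test.mean().item())    # exactly 1.0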
