Author | 李秋键

Editor | Carol

Cover image | CSDN, from 视觉中国 (VCG)


GAN-based image generation has become increasingly widespread in recent years, largely because the adversarial game keeps improving the networks' modeling power until the generated images are indistinguishable from real ones. A GAN consists of two neural networks: a generator and a discriminator. The generator tries to produce realistic samples that fool the discriminator, while the discriminator tries to tell real samples from generated ones. This adversarial game drives both networks to keep improving, and once a Nash equilibrium is reached the generator can produce output good enough to pass for real.
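Concretely, this game amounts to the standard minimax objective. The sketch below is only an illustration of the two loss terms in TensorFlow 1.x (the logit names are placeholders, not part of this article's model); the article's own loss definitions later follow the same pattern:

    import tensorflow as tf

    def gan_losses(d_real_logit, d_fake_logit):
        # d_real_logit = D(x), d_fake_logit = D(G(z)): discriminator logits
        # for real and generated samples respectively (placeholder names).
        # Discriminator: push real logits toward 1 and fake logits toward 0.
        d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
            logits=d_real_logit, labels=tf.ones_like(d_real_logit)))
        d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
            logits=d_fake_logit, labels=tf.zeros_like(d_fake_logit)))
        d_loss = d_loss_real + d_loss_fake
        # Generator (non-saturating form): push fake logits toward 1.
        g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
            logits=d_fake_logit, labels=tf.ones_like(d_fake_logit)))
        return d_loss, g_loss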

Image generation is the most prominent GAN application, but there are many others in computer vision, such as image inpainting, image captioning, object detection, and semantic segmentation. Applying GANs in natural language processing is a growing research trend as well, for example in text modeling, dialogue generation, question answering, and machine translation. Training GANs on NLP tasks is considerably harder, however, and requires extra techniques, which makes it a challenging but interesting research area.

Today we will use a CC-GAN to train a model that turns side-view (profile) faces into frontal faces. The result after 20 training iterations looks like this:

Preparation

We use Python 3.6.5 with the following modules: tensorflow for building the network layers and training the model; numpy for matrix operations; OpenCV for reading and processing images; and os for local file operations such as reading the dataset.
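For reference, these are the imports assumed by all of the code snippets below (TensorFlow 1.x; reader, model, and ops are the project's own modules shipped with the source code):

    import os
    import time
    import random
    import numpy as np
    import cv2 as cv
    import tensorflow as tf
    import tensorflow.contrib as tf_contrib
    import tensorflow.contrib.slim as slim
    # project-specific modules from the source code package
    import reader   # dataset listing and batch reading
    import model    # generators and discriminator
    import ops      # loss helpers (L2, VGG, texture, TV)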

Preparing the data

Face images taken from different angles are placed in the following folder as the training set, as shown below:

The test set images are shown below:

Building the model

In theory the original GAN (see GAN 简介与代码实战) can approximate the real data distribution arbitrarily well, but it is hard to control: it does fine on small images, while larger generated images can come out incoherent. To steer a GAN toward the images we actually want, constraints have to be added, and that is where the conditional GAN (CGAN) comes in. The overall structure of the CC-GAN model is shown below:

1. Building the network structure:

First we define the normalization, activation, and pooling helpers. batch_norm normalizes each batch to zero mean and unit variance, which keeps gradients within a batch from canceling each other out. instance_norm normalizes each sample individually, subtracting the mean and dividing by the standard deviation, which can speed up training.

    def instance_norm(x, scope='instance_norm'):
        return tf_contrib.layers.instance_norm(x, epsilon=1e-05, center=True, scale=True, scope=scope)

    def batch_norm(x, scope='batch_norm'):
        return tf_contrib.layers.batch_norm(x, decay=0.9, epsilon=1e-05, center=True, scale=True, scope=scope)

    def flatten(x):
        return tf.layers.flatten(x)

    def lrelu(x, alpha=0.2):
        return tf.nn.leaky_relu(x, alpha)

    def relu(x):
        return tf.nn.relu(x)

    def global_avg_pooling(x):
        gap = tf.reduce_mean(x, axis=[1, 2], keepdims=True)
        return gap

    def resblock(x_init, c, scope='resblock'):
        # Two 3x3 conv layers with a skip connection from the block input.
        with tf.variable_scope(scope):
            with tf.variable_scope('res1'):
                x = slim.conv2d(x_init, c, kernel_size=[3, 3], stride=1, activation_fn=None)
                x = batch_norm(x)
                x = relu(x)
            with tf.variable_scope('res2'):
                x = slim.conv2d(x, c, kernel_size=[3, 3], stride=1, activation_fn=None)
                x = batch_norm(x)
            return x + x_init

Next comes the convolution block:

    def conv(x, c):
        # Multi-scale convolution: 5x5, 3x3 and 1x1 branches, each with stride 2,
        # concatenated along the channel axis and fused by a 1x1 convolution.
        x1 = slim.conv2d(x, c, kernel_size=[5, 5], stride=2, padding='SAME', activation_fn=relu)
        x2 = slim.conv2d(x, c, kernel_size=[3, 3], stride=2, padding='SAME', activation_fn=relu)
        x3 = slim.conv2d(x, c, kernel_size=[1, 1], stride=2, padding='SAME', activation_fn=relu)
        out = tf.concat([x1, x2, x3], axis=3)
        out = slim.conv2d(out, c, kernel_size=[1, 1], stride=1, padding='SAME', activation_fn=None)
        return out
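As a quick shape check (a hypothetical example, not from the article): feeding a batch of 128x128 RGB images through conv halves the spatial resolution and yields c output channels.

    # Hypothetical sanity check of the conv block's output shape.
    x = tf.placeholder(tf.float32, [None, 128, 128, 3])
    y = conv(x, 32)
    print(y.shape)  # (?, 64, 64, 32): stride 2 halves height and width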

Defining the generator:

    def mixgenerator(x_init, c, org_pose, trg_pose):
        # Reuse the variable scope if a generator has already been built.
        reuse = len([t for t in tf.global_variables() if t.name.startswith('generator')]) > 0
        with tf.variable_scope('generator', reuse=reuse):
            # Broadcast the source pose label over the spatial grid and
            # concatenate it to the input image along the channel axis.
            org_pose = tf.cast(tf.reshape(org_pose, shape=[-1, 1, 1, org_pose.shape[-1]]), tf.float32)
            org_pose = tf.tile(org_pose, [1, x_init.shape[1], x_init.shape[2], 1])
            x = tf.concat([x_init, org_pose], axis=-1)
            # Encoder: five downsampling conv blocks (128 -> 4).
            x = conv(x, c)
            x = batch_norm(x, scope='bat_norm_1')
            x = relu(x)  # 64
            x = conv(x, c*2)
            x = batch_norm(x, scope='bat_norm_2')
            x = relu(x)  # 32
            x = conv(x, c*4)
            x = batch_norm(x, scope='bat_norm_3')
            x = relu(x)  # 16
            f_org = x
            x = conv(x, c*8)
            x = batch_norm(x, scope='bat_norm_4')
            x = relu(x)  # 8
            x = conv(x, c*8)
            x = batch_norm(x, scope='bat_norm_5')
            x = relu(x)  # 4
            # Bottleneck: six residual blocks.
            for i in range(6):
                x = resblock(x, c*8, scope=str(i) + "_resblock")
            # Broadcast the target pose label and concatenate it to the features.
            trg_pose = tf.cast(tf.reshape(trg_pose, shape=[-1, 1, 1, trg_pose.shape[-1]]), tf.float32)
            trg_pose = tf.tile(trg_pose, [1, x.shape[1], x.shape[2], 1])
            x = tf.concat([x, trg_pose], axis=-1)
            # Decoder: five transposed-conv upsampling blocks (4 -> 128).
            x = slim.conv2d_transpose(x, c*8, kernel_size=[3, 3], stride=2, activation_fn=None)
            x = batch_norm(x, scope='bat_norm_8')
            x = relu(x)  # 8
            x = slim.conv2d_transpose(x, c*4, kernel_size=[3, 3], stride=2, activation_fn=None)
            x = batch_norm(x, scope='bat_norm_9')
            x = relu(x)  # 16
            f_trg = x
            x = slim.conv2d_transpose(x, c*2, kernel_size=[3, 3], stride=2, activation_fn=None)
            x = batch_norm(x, scope='bat_norm_10')
            x = relu(x)  # 32
            x = slim.conv2d_transpose(x, c, kernel_size=[3, 3], stride=2, activation_fn=None)
            x = batch_norm(x, scope='bat_norm_11')
            x = relu(x)  # 64
            z = slim.conv2d_transpose(x, 3, kernel_size=[3, 3], stride=2, activation_fn=tf.nn.tanh)
            # Return the generated image and the concatenated encoder/decoder features.
            f = tf.concat([f_org, f_trg], axis=-1)
            return z, f

The discriminator and the remaining functions are defined along the same lines and are not repeated here.
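Purely for orientation, here is a minimal sketch of a pose-aware patch discriminator with the interface used later (model.snpixdiscriminator returns a sigmoid map, its logits, and a pose-classification output). This is an assumption for illustration only: the real implementation differs, and in particular the spectral normalization suggested by the "sn" in the name is omitted here.

    def snpixdiscriminator(x, c=32):
        # Hypothetical sketch -- NOT the article's actual implementation.
        reuse = len([t for t in tf.global_variables() if t.name.startswith('discriminator')]) > 0
        with tf.variable_scope('discriminator', reuse=reuse):
            h = lrelu(slim.conv2d(x, c,     kernel_size=[4, 4], stride=2, activation_fn=None))
            h = lrelu(slim.conv2d(h, c * 2, kernel_size=[4, 4], stride=2, activation_fn=None))
            h = lrelu(slim.conv2d(h, c * 4, kernel_size=[4, 4], stride=2, activation_fn=None))
            # Patch head: one real/fake logit per spatial patch.
            logit = slim.conv2d(h, 1, kernel_size=[3, 3], stride=1, activation_fn=None)
            prob = tf.nn.sigmoid(logit)
            # Pose head: 9-way classification via global average pooling.
            pose = flatten(global_avg_pooling(
                slim.conv2d(h, 9, kernel_size=[1, 1], stride=1, activation_fn=None)))
            return prob, logit, pose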

2. Setting up the VGG network:

Building the VGG network layers:

    def build(self, rgb, include_fc=False):
        """
        Load variables from npy to build the VGG.
        Input format: BGR image with shape [batch_size, h, w, 3], scaled to (-1, 1).
        """
        start_time = time.time()
        rgb_scaled = (rgb + 1) / 2  # [-1, 1] -> [0, 1]
        # The standard per-channel VGG mean subtraction is skipped here.
        self.conv1_1 = self.conv_layer(rgb_scaled, "conv1_1")
        self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
        self.pool1 = self.max_pool(self.conv1_2, 'pool1')
        self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
        self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
        self.pool2 = self.max_pool(self.conv2_2, 'pool2')
        self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
        # The *_no_activation outputs expose pre-activation features for the losses.
        self.conv3_2_no_activation = self.no_activation_conv_layer(self.conv3_1, "conv3_2")
        self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
        self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
        self.conv3_4 = self.conv_layer(self.conv3_3, "conv3_4")
        self.pool3 = self.max_pool(self.conv3_4, 'pool3')
        self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
        self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
        self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
        self.conv4_4_no_activation = self.no_activation_conv_layer(self.conv4_3, "conv4_4")
        self.conv4_4 = self.conv_layer(self.conv4_3, "conv4_4")
        self.pool4 = self.max_pool(self.conv4_4, 'pool4')
        self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
        self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
        self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
        self.conv5_4_no_activation = self.no_activation_conv_layer(self.conv5_3, "conv5_4")
        self.conv5_4 = self.conv_layer(self.conv5_3, "conv5_4")
        self.pool5 = self.max_pool(self.conv5_4, 'pool5')
        if include_fc:
            self.fc6 = self.fc_layer(self.pool5, "fc6")
            assert self.fc6.get_shape().as_list()[1:] == [4096]
            self.relu6 = tf.nn.relu(self.fc6)
            self.fc7 = self.fc_layer(self.relu6, "fc7")
            self.relu7 = tf.nn.relu(self.fc7)
            self.fc8 = self.fc_layer(self.relu7, "fc8")
            self.prob = tf.nn.softmax(self.fc8, name="prob")
        self.data_dict = None
        print("Finished building vgg19: %ds" % (time.time() - start_time))

Defining the pooling, convolution, and fully connected layers:

    def avg_pool(self, bottom, name):
        return tf.nn.avg_pool(bottom, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name=name)

    def max_pool(self, bottom, name):
        return tf.nn.max_pool(bottom, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name=name)

    def conv_layer(self, bottom, name):
        # 3x3 convolution with pretrained weights, bias and ReLU.
        with tf.variable_scope(name):
            filt = self.get_conv_filter(name)
            conv = tf.nn.conv2d(bottom, filt, [1, 1, 1, 1], padding='SAME')
            conv_biases = self.get_bias(name)
            bias = tf.nn.bias_add(conv, conv_biases)
            relu = tf.nn.relu(bias)
            return relu

    def no_activation_conv_layer(self, bottom, name):
        # Same convolution, but returns the pre-activation output.
        with tf.variable_scope(name):
            filt = self.get_conv_filter(name)
            conv = tf.nn.conv2d(bottom, filt, [1, 1, 1, 1], padding='SAME')
            conv_biases = self.get_bias(name)
            x = tf.nn.bias_add(conv, conv_biases)
            return x

    def fc_layer(self, bottom, name):
        with tf.variable_scope(name):
            shape = bottom.get_shape().as_list()
            dim = 1
            for d in shape[1:]:
                dim *= d
            x = tf.reshape(bottom, [-1, dim])
            weights = self.get_fc_weight(name)
            biases = self.get_bias(name)
            # Fully connected layer. Note that the '+' operation automatically
            # broadcasts the biases.
            fc = tf.nn.bias_add(tf.matmul(x, weights), biases)
            return fc

    def get_conv_filter(self, name):
        return tf.constant(self.data_dict[name][0], name="filter")

    def get_bias(self, name):
        return tf.constant(self.data_dict[name][1], name="biases")

    def get_fc_weight(self, name):
        return tf.constant(self.data_dict[name][0], name="weights")
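The pretrained weights come from self.data_dict, whose loading the article does not show. A typical constructor for npy-based VGG19 weights (an assumption, following the common tensorflow-vgg convention of a dict mapping layer names to [weights, biases]) would look like this:

    class Vgg19:
        def __init__(self, vgg19_npy_path='./vgg19.npy'):
            # Hypothetical loader; the path is a placeholder, and allow_pickle
            # is required on newer NumPy versions.
            self.data_dict = np.load(vgg19_npy_path, encoding='latin1', allow_pickle=True).item()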

Training the model

Enable GPU-accelerated training; this requires a working CUDA setup and the GPU build of TensorFlow (tensorflow-gpu).

    os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # train on the first GPU
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True    # allocate GPU memory on demand
    tf.reset_default_graph()

Reading the dataset and splitting it into training batches:

    imagedir = './data/'
    img_label_org, label_trg, img = reader.images_list(imagedir)
    epoch = 800
    batch_size = 10
    total_sample_num = len(img_label_org)
    if total_sample_num % batch_size == 0:
        n_batch = int(total_sample_num / batch_size)
    else:
        n_batch = int(total_sample_num / batch_size) + 1
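The batch count above is simply a ceiling division; an equivalent one-liner (a stylistic aside, not from the article) is:

    import math
    n_batch = math.ceil(total_sample_num / batch_size)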

Initializing the input and output placeholders, the generators, and the discriminator:

    org_image = tf.placeholder(tf.float32, [None, 128, 128, 3], name='org_image')
    trg_image = tf.placeholder(tf.float32, [None, 128, 128, 3], name='trg_image')
    org_pose = tf.placeholder(tf.float32, [None, 9], name='org_pose')  # 9 pose classes
    trg_pose = tf.placeholder(tf.float32, [None, 9], name='trg_pose')

    gen_trg, feat = model.mixgenerator(org_image, 32, org_pose, trg_pose)
    out_trg = model.generator(feat, 32, trg_pose)

    # D_ab
    D_r, real_logit, real_pose = model.snpixdiscriminator(trg_image)
    D_f, fake_logit, fake_pose = model.snpixdiscriminator(gen_trg)
    D_f_, fake_logit_, fake_pose_ = model.snpixdiscriminator(out_trg)

    # real/fake discriminator loss
    loss_pred_r = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=real_logit, labels=tf.ones_like(D_r)))
    loss_pred_f = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=fake_logit_, labels=tf.zeros_like(D_f_)))
    loss_d_pred = loss_pred_r + loss_pred_f

    # pose-classification losses
    loss_d_pose = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=real_pose, labels=trg_pose))
    loss_g_pose_ = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=fake_pose_, labels=trg_pose))
    loss_g_pose = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=fake_pose, labels=trg_pose))

    # generator losses
    loss_g_pred = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=fake_logit_, labels=tf.ones_like(D_f_)))
    out_pix_loss = ops.L2_loss(out_trg, trg_image)
    out_pre_loss, out_feat_texture = ops.vgg_loss(out_trg, trg_image)
    out_loss_texture = ops.texture_loss(out_feat_texture)
    out_loss_tv = 0.0002 * tf.reduce_mean(ops.tv_loss(out_trg))
    gen_pix_loss = ops.L2_loss(gen_trg, trg_image)
    out_g_loss = 100*gen_pix_loss + 100*out_pix_loss + loss_g_pred + out_pre_loss + out_loss_texture + out_loss_tv + loss_g_pose_
    gen_g_loss = 100 * gen_pix_loss + loss_g_pose

    # discriminator loss
    disc_loss = loss_d_pred + loss_d_pose

    # learning-rate schedule: constant for the first 500k steps, then linear decay to 0
    out_global_step = tf.Variable(0, trainable=False)
    gen_global_step = tf.Variable(0, trainable=False)
    disc_global_step = tf.Variable(0, trainable=False)
    start_decay_step = 500000
    start_learning_rate = 0.0001
    decay_steps = 500000
    end_learning_rate = 0.0
    out_lr = tf.where(tf.greater_equal(out_global_step, start_decay_step),
                      tf.train.polynomial_decay(start_learning_rate, out_global_step - start_decay_step,
                                                decay_steps, end_learning_rate, power=1.0),
                      start_learning_rate)
    gen_lr = tf.where(tf.greater_equal(gen_global_step, start_decay_step),
                      tf.train.polynomial_decay(start_learning_rate, gen_global_step - start_decay_step,
                                                decay_steps, end_learning_rate, power=1.0),
                      start_learning_rate)
    disc_lr = tf.where(tf.greater_equal(disc_global_step, start_decay_step),
                       tf.train.polynomial_decay(start_learning_rate, disc_global_step - start_decay_step,
                                                 decay_steps, end_learning_rate, power=1.0),
                       start_learning_rate)

    # split the trainable variables among the three optimizers
    t_vars = tf.trainable_variables()
    g_gen_vars = [var for var in t_vars if 'generator' in var.name]
    g_out_vars = [var for var in t_vars if 'generator_1' in var.name]
    d_vars = [var for var in t_vars if 'discriminator' in var.name]
    train_gen = tf.train.AdamOptimizer(gen_lr, beta1=0.5, beta2=0.999).minimize(gen_g_loss, var_list=g_gen_vars, global_step=gen_global_step)
    train_out = tf.train.AdamOptimizer(out_lr, beta1=0.5, beta2=0.999).minimize(out_g_loss, var_list=g_out_vars, global_step=out_global_step)
    train_disc = tf.train.AdamOptimizer(disc_lr, beta1=0.5, beta2=0.999).minimize(disc_loss, var_list=d_vars, global_step=disc_global_step)
    saver = tf.train.Saver(tf.global_variables())
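The loss helpers in the ops module (L2, VGG perceptual, texture, and total-variation losses) are not shown in the article. Below is a minimal sketch of plausible implementations, written as an assumption: the texture loss, for instance, is commonly a Gram-matrix distance over VGG feature maps, and the exact layers and weightings in the source may differ.

    def L2_loss(x, y):
        # Mean squared error between two image batches.
        return tf.reduce_mean(tf.square(x - y))

    def tv_loss(x):
        # Total variation: penalizes differences between neighboring pixels.
        return tf.reduce_mean(tf.abs(x[:, 1:, :, :] - x[:, :-1, :, :])) + \
               tf.reduce_mean(tf.abs(x[:, :, 1:, :] - x[:, :, :-1, :]))

    def gram_matrix(feat):
        # Channel-correlation matrix of a feature map, used for texture matching.
        _, h, w, c = feat.get_shape().as_list()
        feat = tf.reshape(feat, [-1, h * w, c])
        return tf.matmul(feat, feat, transpose_a=True) / (h * w * c)

    def vgg_loss(fake, real):
        # Perceptual loss: distance between VGG features of generated and target images.
        vgg_f, vgg_r = Vgg19(), Vgg19()
        vgg_f.build(fake)
        vgg_r.build(real)
        loss = tf.reduce_mean(tf.square(vgg_f.conv3_2_no_activation - vgg_r.conv3_2_no_activation))
        # Also hand back deeper feature pairs for the texture loss.
        feat_pairs = [(vgg_f.conv4_4_no_activation, vgg_r.conv4_4_no_activation)]
        return loss, feat_pairs

    def texture_loss(feat_pairs):
        # Texture loss: distance between Gram matrices of matching feature maps.
        return tf.add_n([tf.reduce_mean(tf.square(gram_matrix(f) - gram_matrix(r)))
                         for f, r in feat_pairs])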

Training the model, generating sample images, and saving checkpoints:

    with tf.Session(config=config) as sess:
        for d in ['/gpu:0']:
            with tf.device(d):
                # Resume from the latest checkpoint if one exists.
                ckpt = tf.train.get_checkpoint_state('./models/')
                if ckpt and tf.train.checkpoint_exists(ckpt.model_checkpoint_path):
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    print('Import models successful!')
                else:
                    sess.run(tf.global_variables_initializer())
                    print('Initialize successful!')
                for i in range(epoch):
                    random.shuffle(img_label_org)
                    random.shuffle(label_trg)
                    for j in range(n_batch):
                        # The last batch takes whatever samples remain.
                        if j == n_batch - 1:
                            n = total_sample_num
                        else:
                            n = j * batch_size + batch_size
                        img_org_output, img_trg_output, label_org_output, label_trg_output, image_name_output = \
                            reader.images_read(img_label_org[j*batch_size:n], label_trg[j*batch_size:n], img, imagedir)
                        feeds = {org_image: img_org_output, trg_image: img_trg_output,
                                 org_pose: label_org_output, trg_pose: label_trg_output}
                        if i < 400:
                            # First 400 epochs: update the discriminator on every step.
                            sess.run(train_disc, feed_dict=feeds)
                            sess.run(train_gen, feed_dict=feeds)
                            sess.run(train_out, feed_dict=feeds)
                        else:
                            # Afterwards: update the discriminator only every 10th step.
                            sess.run(train_gen, feed_dict=feeds)
                            sess.run(train_out, feed_dict=feeds)
                            if j % 10 == 0:
                                sess.run(train_disc, feed_dict=feeds)
                        if j % 2 == 0:
                            gen_g_loss_, out_g_loss_, disc_loss_, org_image_, gen_trg_, out_trg_, trg_image_ = sess.run(
                                [gen_g_loss, out_g_loss, disc_loss, org_image, gen_trg, out_trg, trg_image], feeds)
                            print("epoch:", i, "iter:", j, "gen_g_loss_:", gen_g_loss_,
                                  "out_g_loss_:", out_g_loss_, "loss_disc:", disc_loss_)
                            # Save input / intermediate / refined / target images side by side.
                            for n in range(len(org_image_)):  # the last batch may hold fewer samples
                                org_image_output = (org_image_[n] + 1) * 127.5
                                gen_trg_output = (gen_trg_[n] + 1) * 127.5
                                out_trg_output = (out_trg_[n] + 1) * 127.5
                                trg_image_output = (trg_image_[n] + 1) * 127.5
                                temp = np.concatenate([org_image_output, gen_trg_output, out_trg_output, trg_image_output], 1)
                                cv.imwrite("./record/%d_%d_%d_image.jpg" % (i, j, n), temp)
                    if i % 10 == 0 or i == epoch - 1:
                        saver.save(sess, './models/wssGAN.ckpt', global_step=gen_global_step)
        print("Finish!")
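Once training is done, the saved checkpoint can be restored to frontalize a new image. Here is a minimal inference sketch (hypothetical: the pose one-hot indices and the test image path are placeholders, and the pose encoding must match how reader labels the training data):

    # Hypothetical inference sketch -- restore the checkpoint and frontalize one image.
    with tf.Session(config=config) as sess:
        saver.restore(sess, tf.train.latest_checkpoint('./models/'))
        face = cv.imread('./test/side_face.jpg')          # placeholder path
        face = cv.resize(face, (128, 128)) / 127.5 - 1.0  # match the training scaling
        src_pose = np.eye(9)[[3]]  # assumed one-hot label of the input view
        dst_pose = np.eye(9)[[4]]  # assumed one-hot label of the frontal view
        gen = sess.run(out_trg, feed_dict={org_image: face[None],
                                           org_pose: src_pose, trg_pose: dst_pose})
        cv.imwrite('frontal.jpg', (gen[0] + 1) * 127.5)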

The final results of running the program are shown below:

Result after one training iteration:

Result after 20 iterations:

Comparing the two, the improvement is clearly visible!

Source code:

https://pan.baidu.com/s/1cpRJlk7yUwhYJSIkRpbNpg

Extraction code: kdxe
