PyTorch Learning Notes (8) -- Using and Modifying Existing Models
Official documentation: https://pytorch.org/vision/0.9/models.html#semantic-segmentation
(1) ImageNet
- train_data = torchvision.datasets.ImageNet("../dataset", split='train', transform=torchvision.transforms.ToTensor())
ImageNet is a large-scale computer-vision dataset created to advance image-recognition research, and it has long served as the standard benchmark for evaluating image-classification algorithms. A detailed introduction (in Chinese): https://blog.51cto.com/yunyaniu/5245552
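ImageNet is far too large for torchvision to download automatically, so the dataset class expects the archives to already be in the root directory. A small sketch of what happens otherwise (the exact error message depends on the torchvision version):
- import torchvision
- try:
-     train_data = torchvision.datasets.ImageNet("../dataset", split='train',
-                                                transform=torchvision.transforms.ToTensor())
- except RuntimeError as err:
-     # torchvision asks you to download the ImageNet archives manually and place them in ../dataset
-     print(err)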
(2) The VGG16 network
- The weights parameter controls whether pretrained parameters are loaded: pass weights=None for a randomly initialized network, or a weights enum such as torchvision.models.VGG16_Weights.DEFAULT to load weights pretrained on ImageNet (older torchvision versions used pretrained=True/False for the same purpose).
Code:
- # file : model_pretrained.py
- # time : 2022/8/5 4:19 PM
- # function : VGG16
- import torchvision
- # ImageNet needs to be downloaded manually
- # train_data = torchvision.datasets.ImageNet("../dataset", split='train', transform=torchvision.transforms.ToTensor())
- vgg16_false = torchvision.models.vgg16(weights=None)                                     # randomly initialized
- vgg16_true = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT)  # pretrained on ImageNet
- print(vgg16_true)
Output:
- VGG(
- (features): Sequential(
- (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (1): ReLU(inplace=True)
- (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (3): ReLU(inplace=True)
- (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (6): ReLU(inplace=True)
- (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (8): ReLU(inplace=True)
- (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (11): ReLU(inplace=True)
- (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (13): ReLU(inplace=True)
- (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (15): ReLU(inplace=True)
- (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (18): ReLU(inplace=True)
- (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (20): ReLU(inplace=True)
- (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (22): ReLU(inplace=True)
- (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (25): ReLU(inplace=True)
- (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (27): ReLU(inplace=True)
- (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (29): ReLU(inplace=True)
- (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- )
- (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
- (classifier): Sequential(
- (0): Linear(in_features=25088, out_features=4096, bias=True)
- (1): ReLU(inplace=True)
- (2): Dropout(p=0.5, inplace=False)
- (3): Linear(in_features=4096, out_features=4096, bias=True)
- (4): ReLU(inplace=True)
- (5): Dropout(p=0.5, inplace=False)
- (6): Linear(in_features=4096, out_features=1000, bias=True)
- )
- )
The output shows that VGG16 consists of 13 convolutional layers and 3 fully connected layers, and the final layer produces 1000 class scores (ImageNet's 1000 categories).
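A quick sanity check of this layer count (a small sketch; it assumes the vgg16_true model created above is still in scope):
- import torch
- from torch import nn
- # count the Conv2d and Linear modules listed in the printout above
- num_conv = sum(isinstance(m, nn.Conv2d) for m in vgg16_true.modules())
- num_fc = sum(isinstance(m, nn.Linear) for m in vgg16_true.modules())
- print(num_conv, num_fc)                                  # 13 3
- # a dummy ImageNet-sized input produces 1000 class scores
- print(vgg16_true(torch.randn(1, 3, 224, 224)).shape)     # torch.Size([1, 1000])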
(3) Modifying the structure of the existing VGG16 model
Append a linear layer to the VGG16 model to map its 1000 ImageNet classes down to CIFAR10's 10 classes. The code is as follows:
- # file : model_pretrained.py
- # time : 2022/8/5 4:19 PM
- # function : modify VGG16
- import torchvision
- from torch import nn
- vgg16_false = torchvision.models.vgg16(weights=None)
- vgg16_true = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT)
- print(vgg16_true)
- train_data = torchvision.datasets.CIFAR10("../dataset", train=True, transform=torchvision.transforms.ToTensor(), download=True)
- vgg16_true.add_module('add_linear', nn.Linear(1000, 10))
- print(vgg16_true)
Output:
- VGG(
- (features): Sequential(
- (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (1): ReLU(inplace=True)
- (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (3): ReLU(inplace=True)
- (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (6): ReLU(inplace=True)
- (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (8): ReLU(inplace=True)
- (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (11): ReLU(inplace=True)
- (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (13): ReLU(inplace=True)
- (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (15): ReLU(inplace=True)
- (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (18): ReLU(inplace=True)
- (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (20): ReLU(inplace=True)
- (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (22): ReLU(inplace=True)
- (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (25): ReLU(inplace=True)
- (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (27): ReLU(inplace=True)
- (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (29): ReLU(inplace=True)
- (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- )
- (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
- (classifier): Sequential(
- (0): Linear(in_features=25088, out_features=4096, bias=True)
- (1): ReLU(inplace=True)
- (2): Dropout(p=0.5, inplace=False)
- (3): Linear(in_features=4096, out_features=4096, bias=True)
- (4): ReLU(inplace=True)
- (5): Dropout(p=0.5, inplace=False)
- (6): Linear(in_features=4096, out_features=1000, bias=True)
- )
- )
- VGG(
- (features): Sequential(
- (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (1): ReLU(inplace=True)
- (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (3): ReLU(inplace=True)
- (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (6): ReLU(inplace=True)
- (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (8): ReLU(inplace=True)
- (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (11): ReLU(inplace=True)
- (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (13): ReLU(inplace=True)
- (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (15): ReLU(inplace=True)
- (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (18): ReLU(inplace=True)
- (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (20): ReLU(inplace=True)
- (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (22): ReLU(inplace=True)
- (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (25): ReLU(inplace=True)
- (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (27): ReLU(inplace=True)
- (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (29): ReLU(inplace=True)
- (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- )
- (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
- (classifier): Sequential(
- (0): Linear(in_features=25088, out_features=4096, bias=True)
- (1): ReLU(inplace=True)
- (2): Dropout(p=0.5, inplace=False)
- (3): Linear(in_features=4096, out_features=4096, bias=True)
- (4): ReLU(inplace=True)
- (5): Dropout(p=0.5, inplace=False)
- (6): Linear(in_features=4096, out_features=1000, bias=True)
- )
- (add_linear): Linear(in_features=1000, out_features=10, bias=True)
- )
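One caveat: add_module at the top level of VGG only registers add_linear as a child module. Torchvision's VGG.forward runs features, avgpool, flatten and classifier in turn, so the extra layer is not actually invoked during a forward pass and the model still returns 1000 scores. A minimal check (assuming vgg16_true from the code above is still in scope):
- import torch
- # the registered add_linear is not part of VGG.forward, so the output size is unchanged
- x = torch.randn(1, 3, 224, 224)          # dummy ImageNet-sized input
- print(vgg16_true(x).shape)               # torch.Size([1, 1000])
To make the new layer take effect automatically, add it inside the classifier Sequential (which calls its children in order), as shown next.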
If you want the extra linear layer to be added inside the classifier instead, modify the code as follows:
- vgg16_true.classifier.add_module('add_linear', nn.Linear(1000, 10))
Output:
- VGG(
- (features): Sequential(
- (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (1): ReLU(inplace=True)
- (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (3): ReLU(inplace=True)
- (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (6): ReLU(inplace=True)
- (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (8): ReLU(inplace=True)
- (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (11): ReLU(inplace=True)
- (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (13): ReLU(inplace=True)
- (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (15): ReLU(inplace=True)
- (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (18): ReLU(inplace=True)
- (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (20): ReLU(inplace=True)
- (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (22): ReLU(inplace=True)
- (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (25): ReLU(inplace=True)
- (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (27): ReLU(inplace=True)
- (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (29): ReLU(inplace=True)
- (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- )
- (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
- (classifier): Sequential(
- (0): Linear(in_features=25088, out_features=4096, bias=True)
- (1): ReLU(inplace=True)
- (2): Dropout(p=0.5, inplace=False)
- (3): Linear(in_features=4096, out_features=4096, bias=True)
- (4): ReLU(inplace=True)
- (5): Dropout(p=0.5, inplace=False)
- (6): Linear(in_features=4096, out_features=1000, bias=True)
- (add_linear): Linear(in_features=1000, out_features=10, bias=True)
- )
- )
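A quick shape check (assuming the vgg16_true model modified above): because the classifier Sequential runs its submodules in order, the appended add_linear is applied and the model now outputs 10 scores.
- import torch
- x = torch.randn(1, 3, 224, 224)          # dummy ImageNet-sized input
- print(vgg16_true(x).shape)               # torch.Size([1, 10])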
If instead you want to change the last layer of the classifier so that it outputs 10 classes directly, modify the code as follows:
- # file : model_pretrained.py
- # time : 2022/8/5 4:19 PM
- # function : replace the last classifier layer
- import torchvision
- from torch import nn
- vgg16_false = torchvision.models.vgg16(weights=None)
- print(vgg16_false)
- vgg16_false.classifier[6] = nn.Linear(4096, 10)
- print(vgg16_false)
Output:
- VGG(
- (features): Sequential(
- (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (1): ReLU(inplace=True)
- (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (3): ReLU(inplace=True)
- (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (6): ReLU(inplace=True)
- (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (8): ReLU(inplace=True)
- (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (11): ReLU(inplace=True)
- (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (13): ReLU(inplace=True)
- (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (15): ReLU(inplace=True)
- (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (18): ReLU(inplace=True)
- (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (20): ReLU(inplace=True)
- (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (22): ReLU(inplace=True)
- (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (25): ReLU(inplace=True)
- (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (27): ReLU(inplace=True)
- (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (29): ReLU(inplace=True)
- (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- )
- (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
- (classifier): Sequential(
- (0): Linear(in_features=25088, out_features=4096, bias=True)
- (1): ReLU(inplace=True)
- (2): Dropout(p=0.5, inplace=False)
- (3): Linear(in_features=4096, out_features=4096, bias=True)
- (4): ReLU(inplace=True)
- (5): Dropout(p=0.5, inplace=False)
- (6): Linear(in_features=4096, out_features=1000, bias=True)
- )
- )
- VGG(
- (features): Sequential(
- (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (1): ReLU(inplace=True)
- (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (3): ReLU(inplace=True)
- (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (6): ReLU(inplace=True)
- (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (8): ReLU(inplace=True)
- (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (11): ReLU(inplace=True)
- (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (13): ReLU(inplace=True)
- (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (15): ReLU(inplace=True)
- (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (18): ReLU(inplace=True)
- (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (20): ReLU(inplace=True)
- (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (22): ReLU(inplace=True)
- (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (25): ReLU(inplace=True)
- (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (27): ReLU(inplace=True)
- (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
- (29): ReLU(inplace=True)
- (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- )
- (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
- (classifier): Sequential(
- (0): Linear(in_features=25088, out_features=4096, bias=True)
- (1): ReLU(inplace=True)
- (2): Dropout(p=0.5, inplace=False)
- (3): Linear(in_features=4096, out_features=4096, bias=True)
- (4): ReLU(inplace=True)
- (5): Dropout(p=0.5, inplace=False)
- (6): Linear(in_features=4096, out_features=10, bias=True)
- )
- )
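As a follow-up sketch, the modified model can be fine-tuned on CIFAR10 by freezing the pretrained weights and training only the replaced 10-class layer; the resize transform, optimizer and hyperparameters below are illustrative assumptions rather than settings from these notes.
- import torch
- import torchvision
- from torch import nn
- from torch.utils.data import DataLoader
- # load the pretrained model and freeze all of its parameters
- model = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT)
- for param in model.parameters():
-     param.requires_grad = False
- # replace the last classifier layer; the new layer is trainable by default
- model.classifier[6] = nn.Linear(4096, 10)
- # CIFAR10 images are 32x32; resize them to the 224x224 input size VGG16 expects
- transform = torchvision.transforms.Compose([
-     torchvision.transforms.Resize((224, 224)),
-     torchvision.transforms.ToTensor(),
- ])
- train_data = torchvision.datasets.CIFAR10("../dataset", train=True, transform=transform, download=True)
- train_loader = DataLoader(train_data, batch_size=64, shuffle=True)
- loss_fn = nn.CrossEntropyLoss()
- optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=0.01)
- model.train()
- for imgs, targets in train_loader:              # one epoch, purely as a demonstration
-     outputs = model(imgs)
-     loss = loss_fn(outputs, targets)
-     optimizer.zero_grad()
-     loss.backward()
-     optimizer.step()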