[PyTorch] PyTorch Basics
1. Introduction
I have recently been learning PyTorch. I first worked through the official "Deep Learning with PyTorch: A 60 Minute Blitz" tutorial, then found two well-starred projects on GitHub and rewrote the code myself to study how other people structure their PyTorch programs.
2. Deep Learning with PyTorch: A 60 Minute Blitz
2.1 Basic operations
# base operations
import torch
x = torch.empty(5, 3)            # uninitialized 5x3 tensor
print(x)
x = torch.rand(1, 2)             # uniform random values in [0, 1)
print(x)
x = torch.zeros(4, 5, dtype=torch.long)
print(x)
x = torch.tensor([5.5, 3])       # construct directly from data
print(x)
x = x.new_ones(5, 3, dtype=torch.double)    # new_* methods reuse properties of x
print(x)
x = torch.randn_like(x, dtype=torch.float)  # same size as x, new dtype
print(x)
print(x.size())
y = torch.rand(5, 3)
print(x + y)
print(torch.add(x, y))
result = torch.empty(5, 3)
torch.add(x, y, out=result)      # write the result into an existing tensor
print(result)
result = x + y
print('result = ', result)
result = torch.add(x, y)
print('result2 = ', result)
y.add_(x)                        # operations suffixed with _ mutate in place
print(y)
# transpose in place
y.t_()
print(y)
# resize: view reshapes a tensor; -1 infers that dimension from the others
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)
print(x.size(), y.size(), z.size())
print(y)
# If you have a one-element tensor, use .item() to get the value as a Python number
x = torch.randn(1)
print(x)
print(x.item())
# numpy bridge: the Torch Tensor and the NumPy array share their underlying memory
a = torch.ones(5)
b = a.numpy()
print(a)
print(b)
a.add_(1)
print(a)
print(b)
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)  # modifying the ndarray in place changes the tensor too
print(a)
print(b)
# cuda
if torch.cuda.is_available():
    device = torch.device("cuda")
    y = torch.ones_like(x, device=device)  # create a tensor directly on the GPU
    x = x.to(device)                       # or move an existing tensor with .to
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # .to can also change dtype on the way back
# Neural Networks
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        # super(Net, self).__init__() finds Net's parent class (nn.Module),
        # treats self as an instance of that parent, and calls the parent's __init__
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
print(net)
'''
Net(
(conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
'''
'''
You just have to define the forward function; the backward function (where gradients are computed) is
automatically defined for you by autograd. You can use any of the Tensor operations in the forward function.
'''
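As a quick illustration of the autograd behavior described above — a minimal sketch of my own, not part of the tutorial:
# minimal autograd sketch (my own example): tensors with requires_grad=True
# record their history, and .backward() on a scalar fills in .grad
x = torch.ones(2, 2, requires_grad=True)
y = (3 * x * x).sum()   # scalar function of x
y.backward()            # autograd computes dy/dx automatically
print(x.grad)           # dy/dx = 6 * x, so every entry is 6.0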
# The learnable parameters of a model are returned by net.parameters()
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
# output
'''
10
torch.Size([6, 1, 5, 5])
'''
# Let's try a random 32x32 input. Note: the expected input size for this net (LeNet) is 32x32. To use this net on the MNIST dataset, resize the images from the dataset to 32x32.
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
# Zero the gradient buffers of all parameters and backprop with random gradients:
net.zero_grad()
out.backward(torch.randn(1, 10))
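The note above says MNIST images need resizing to 32x32 for this LeNet; one way to do that (a sketch of my own, not the tutorial's code) is a torchvision transform pipeline:
# hedged sketch: resize MNIST's 28x28 images to the 32x32 this LeNet expects
from torchvision import transforms
mnist_transform = transforms.Compose([
    transforms.Resize((32, 32)),  # PIL image 28x28 -> 32x32
    transforms.ToTensor(),
])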
'''
torch.nn only supports mini-batches. The entire torch.nn package only supports inputs that are
a mini-batch of samples, and not a single sample.
For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Width.
If you have a single sample, just use input.unsqueeze(0) to add a fake batch dimension.
'''
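A minimal sketch (my own example) of the fake batch dimension mentioned above:
# a single C x H x W sample becomes a 1 x C x H x W mini-batch
single = torch.randn(1, 32, 32)   # one 32x32 grayscale image
batched = single.unsqueeze(0)     # shape: torch.Size([1, 1, 32, 32])
out = net(batched)                # now acceptable to the nn.Conv2d layers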
# Recap
'''
Recap:
torch.Tensor - A multi-dimensional array with support for autograd operations like backward(). Also holds the gradient w.r.t. the tensor.
nn.Module - Neural network module. Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.
nn.Parameter - A kind of Tensor, that is automatically registered as a parameter when assigned as an attribute to a Module.
autograd.Function - Implements forward and backward definitions of an autograd operation. Every Tensor operation creates at least a single Function node
that connects to the functions that created the Tensor and encodes its history.
'''
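For instance, here is a minimal sketch (my own) of the nn.Parameter auto-registration mentioned in the recap:
# assigning an nn.Parameter as a Module attribute registers it automatically
class Affine(nn.Module):
    def __init__(self):
        super(Affine, self).__init__()
        self.w = nn.Parameter(torch.randn(3, 3))  # shows up in .parameters()
print(len(list(Affine().parameters())))  # prints 1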
output = net(input)
target = torch.randn(10)     # a dummy target, for example
target = target.view(1, -1)  # make it the same shape as the output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
# So, when we call loss.backward(), the whole graph is differentiated w.r.t. the loss,
# and all Tensors in the graph that have requires_grad=True will have their .grad Tensor accumulated with the gradient.
#For illustration, let us follow a few steps backward:
print(loss.grad_fn) # MSELoss
print(loss.grad_fn.next_functions[0][0]) # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
# BACKPROP
# To backpropagate the error all we have to do is call loss.backward(). You need to clear the existing gradients first, though, or the new gradients will be accumulated into the existing ones.
# Now we shall call loss.backward(), and have a look at conv1’s bias gradients before and after the backward.
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
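A minimal sketch (my own) of the accumulation behavior that makes the zeroing necessary:
# without zeroing, each backward() adds into .grad instead of replacing it
w = torch.ones(3, requires_grad=True)
(w * 2).sum().backward()
print(w.grad)   # tensor([2., 2., 2.])
(w * 2).sum().backward()
print(w.grad)   # tensor([4., 4., 4.]) -- accumulated, not overwritten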
# Observe how gradient buffers had to be manually set to zero using optimizer.zero_grad(). This is because gradients are accumulated as explained in Backprop section.
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers so batches don't accumulate
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()        # does the update
2.2 Training a classifier
# 1. Load and normalize the data
import torch
import torchvision
import torchvision.transforms as transforms
# The output of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1].
# PILImage: a Python Imaging Library image, the standard image format for image processing on the Python platform
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])  # ToTensor must come before Normalize
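Normalize computes (x - mean) / std per channel, so with mean = std = 0.5 the [0, 1] range from ToTensor maps to [-1, 1]; a quick check of my own:
# sanity check of the normalization arithmetic (my own sketch)
x = torch.tensor([0.0, 0.5, 1.0])
print((x - 0.5) / 0.5)  # tensor([-1., 0., 1.])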
trainset = torchvision.datasets.CIFAR10(root = './data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=0)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=0)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship','truck')
# show training images
'''
import matplotlib.pyplot as plt
import numpy as np

# function to show an image
def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))  # CHW -> HWC for matplotlib
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() no longer works in Python 3
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
'''
# 2. Define a feedforward network
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # first, define the convolutional filters and the fully connected weights
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.max_pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.max_pool(x)
        x = F.relu(self.conv2(x))
        x = self.max_pool(x)
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
# 3. Choose a loss function and an optimizer
import torch.optim as optim
criterion = nn.CrossEntropyLoss() # loss
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
# 4. Train the network
for epoch in range(2):  # loop over the whole dataset multiple times
    total_loss = 0
    for i, data in enumerate(trainloader, 0):  # start counting from 0
        optimizer.zero_grad()              # clear the gradients every batch
        inputs, labels = data              # get a batch of data
        outputs = net(inputs)              # forward pass
        loss = criterion(outputs, labels)  # compute the loss
        loss.backward()                    # compute every parameter's gradient
        optimizer.step()                   # update the parameters
        total_loss += loss.item()
        # print the running loss every 2000 mini-batches
        if i % 2000 == 1999:
            print('[%d %5d] loss: %.3f' % (epoch + 1, i + 1, total_loss / 2000))
            total_loss = 0
print('Finished Training')
# 5. Test the network on the test data
'''
dataiter = iter(testloader)
images, labels = next(dataiter)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
outputs = net(images)
'''
# get the overall accuracy
total = 0
correct = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        # torch.max(t, 1) returns each row's max value and its column index;
        # _ catches the values, which we don't need here, and outputs.data is
        # the raw tensor, detached from autograd
        _, prediction = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (prediction == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
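A tiny sketch (my own) of the (values, indices) pair that torch.max(outputs, 1) returns:
scores = torch.tensor([[0.1, 2.0, 0.3], [1.5, 0.2, 0.9]])
values, indices = torch.max(scores, 1)
print(values)   # tensor([2.0000, 1.5000]) -- the row maxima
print(indices)  # tensor([1, 0]) -- predicted class index per row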
# get the per-class accuracy
class_correct = list(0. for i in range(10))  # one counter per class
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, prediction = torch.max(outputs, 1)
        c = (prediction == labels).squeeze()  # per-sample correctness in this batch
        for i in range(4):  # 4 is the batch size
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1
for i in range(10):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
# train on a single GPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
net.to(device)
inputs, labels = inputs.to(device), labels.to(device)
# train on multiple GPUs
'''
DataParallel splits your data automatically and sends job orders to multiple models on several GPUs. After
each model finishes their job, DataParallel collects and merges the results before returning it to you.
'''
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
# Parameters and DataLoaders
input_size = 5
output_size = 2
batch_size = 30
data_size = 100
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# dummy dataset
class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
                         batch_size=batch_size, shuffle=True)
class Model(nn.Module):
    # Our model
    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        print("\tIn Model: input size", input.size(),
              "output size", output.size())
        return output

model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0: [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)
model.to(device)

for data in rand_loader:
    input = data.to(device)
    output = model(input)
    print("Outside: input size", input.size(),
          "output_size", output.size())
'''
If you have 2 GPUs, you will see:
# on 2 GPUs
Let's use 2 GPUs!
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
'''
'''If you have 3 GPUs, you will see:
Let's use 3 GPUs!
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])'''
'''If you have 8 GPUs, you will see:
Let's use 8 GPUs!
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])'''
3. A structured PyTorch training script for MNIST
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
import argparse
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, 5)
        self.conv2 = nn.Conv2d(10, 20, 5)
        self.conv2_dropout = nn.Dropout2d()
        self.max_pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(self.max_pool(self.conv1(x)))
        x = F.relu(self.max_pool(self.conv2_dropout(self.conv2(x))))  # dropout is applied before the relu
        x = x.view(-1, 320)
        # print(x.size())
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)  # dim is the dimension along which softmax is taken; the output is log-probabilities
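Because the network returns log-probabilities, the training loop below uses F.nll_loss; that combination is equivalent to F.cross_entropy on raw logits, which can be checked with a small sketch of my own:
# log_softmax + nll_loss == cross_entropy (my own sanity check)
logits = torch.randn(4, 10)
targets = torch.tensor([1, 0, 4, 9])
a = F.nll_loss(F.log_softmax(logits, dim=1), targets)
b = F.cross_entropy(logits, targets)
print(torch.allclose(a, b))  # True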
def train(args, model, device, train_loader, optimizer, epoch):
    model.train()  # required when the model has dropout / batch-norm layers
    for i, data in enumerate(train_loader):
        images, labels = data
        images = images.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = F.nll_loss(outputs, labels)  # negative log-likelihood on the log-probabilities
        loss.backward()
        optimizer.step()
        # log training progress
        if i % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, i * len(images), len(train_loader.dataset),
                100. * i / len(train_loader), loss.item()))  # this is the current batch's loss, not an average over log_interval batches
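A small sketch (my own) of what model.train() and model.eval() actually toggle for the dropout layers:
drop = nn.Dropout(p=0.5)
t = torch.ones(8)
drop.train()
print(drop(t))  # roughly half the entries zeroed, survivors scaled by 1 / (1 - p) = 2
drop.eval()
print(drop(t))  # all ones: dropout is a no-op in eval mode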
def test(args, model, device, test_loader):
    model.eval()
    correct_num = 0
    test_loss = 0.
    with torch.no_grad():
        for images, labels in test_loader:
            images = images.to(device)
            labels = labels.to(device)
            outputs = model(images)
            _, prediction = torch.max(outputs.data, 1)  # prediction holds the class index for each sample
            correct_num += (prediction == labels).sum().item()  # number correct in this batch
            test_loss += F.nll_loss(outputs, labels, reduction='sum').item()  # sum up the batch loss
    total_num = len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss / total_num, correct_num, total_num,
        100. * correct_num / total_num))
def main():
    parser = argparse.ArgumentParser(description='PyTorch MNIST')
    parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                        help='input batch size for training (default: 64)')
    parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                        help='input batch size for testing (default: 1000)')
    parser.add_argument('--epochs', type=int, default=10, metavar='N',
                        help='number of epochs to train (default: 10)')
    parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
                        help='learning rate (default: 0.01)')
    parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
                        help='SGD momentum (default: 0.5)')
    parser.add_argument('--no-cuda', action='store_true', default=False,
                        help='disables CUDA training')
    parser.add_argument('--seed', type=int, default=1, metavar='S',
                        help='random seed (default: 1)')
    parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                        help='how many batches to wait before logging training status')
    args = parser.parse_args()
    use_cuda = not args.no_cuda and torch.cuda.is_available()
    torch.manual_seed(args.seed)
    device = torch.device("cuda" if use_cuda else "cpu")
    kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
    train_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=True, download=True,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=args.batch_size, shuffle=True, **kwargs)
    test_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=False, transform=transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])),
        batch_size=args.test_batch_size, shuffle=True, **kwargs)
    model = Net().to(device)
    optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
    for epoch in range(1, args.epochs + 1):
        train(args, model, device, train_loader, optimizer, epoch)
        test(args, model, device, test_loader)

if __name__ == '__main__':
    main()
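Assuming the script above is saved as mnist.py (the filename is my own choice), a typical invocation would look like:
python mnist.py --epochs 5 --lr 0.01 --batch-size 64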