(Original) The Torch training process
Please cite the source when reposting:
http://www.cnblogs.com/darkknightzh/p/6221622.html
References:
http://ju.outofmemory.cn/entry/284587
https://github.com/torch/nn/blob/master/doc/criterion.md
1. Using updateParameters
Assume we already have model = setupmodel (a model you built yourself), together with training data input, the ground-truth output outReal, and a loss function criterion (see the second link above). The Torch training procedure is then:
-- given model, criterion, input, outReal
model:training()
model:zeroGradParameters()
outPredict = model:forward(input)
err = criterion:forward(outPredict, outReal)
grad_criterion = criterion:backward(outPredict, outReal)
model:backward(input, grad_criterion)
model:updateParameters(learningRate)
Line 1 above is a comment listing the quantities assumed to be given.
Line 2 puts the model into training mode.
Line 3 zeroes the gradients stored in every module of the model (so gradients from earlier iterations do not contaminate this one).
Line 4 runs the input through the model to obtain the predicted output outPredict.
Line 5 uses the loss function to compute the error err between the prediction outPredict and the ground truth outReal under the current parameters.
Line 6 computes the gradient grad_criterion of the loss function from the prediction outPredict and the ground truth outReal.
Line 7 back-propagates to compute the gradient of every module in the model.
Line 8 updates the parameters of every module in the model (a plain gradient-descent step: parameters ← parameters − learningRate × gradients).
Lines 3 through 8 must be executed on every training iteration.
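Putting the pieces together, a full training loop looks like the sketch below (a minimal outline; nIterations, getBatch() and learningRate are illustrative placeholders, not names from the original post):
-- sketch of a complete loop built from lines 2-8 above
model:training()
for iter = 1, nIterations do
   local input, outReal = getBatch()                               -- fetch one (mini-)batch; placeholder
   model:zeroGradParameters()                                      -- line 3
   local outPredict = model:forward(input)                         -- line 4
   local err = criterion:forward(outPredict, outReal)              -- line 5
   local grad_criterion = criterion:backward(outPredict, outReal)  -- line 6
   model:backward(input, grad_criterion)                           -- line 7
   model:updateParameters(learningRate)                            -- line 8
end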
=========================================================
2. Using optim
Update 170301:
http://x-wei.github.io/learn-torch-6-optim.html
The page above gives a more convenient approach (whether it is really more convenient is debatable): parameters can be updated with Torch's optim package. Calling model:updateParameters directly only gives you the simplest gradient-descent update, whereas optim wraps many algorithms: plain gradient descent, Adam, and so on.
params_new, fs, ... = optim._method_(feval, params[, config][, state])
where params is the current parameter vector (a 1D tensor); it is updated during optimization
feval: a user-defined closure of the form f, df/dx = feval(x)
config: a table of algorithm parameters (e.g. the learning rate)
state: a table of state variables
params_new: the new parameter vector (a 1D tensor) that minimizes f
fs: a table of the f values evaluated during optimization; fs[#fs] is the final (optimized) function value
Note: since optim expects 1D tensors as input, the model parameters must first be flattened, which is done with the following function:
params, gradParams = model:getParameters()
Both params and gradParams are 1D tensors.
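As a small self-contained illustration of this interface (a sketch of my own, not from the referenced page; the toy model, data and hyperparameters are made up), the closure computes the loss and fills the flattened gradient, and optim.sgd then updates params in place:
require 'nn'
require 'optim'

local net = nn.Linear(4, 1)                     -- toy model
local crit = nn.MSECriterion()
local params, gradParams = net:getParameters()  -- flattened 1D parameter/gradient views

local x = torch.randn(4)                        -- toy input
local y = torch.randn(1)                        -- toy target

local feval = function(p)
   if p ~= params then params:copy(p) end       -- sync if optim passes a different tensor
   gradParams:zero()
   local out = net:forward(x)
   local loss = crit:forward(out, y)
   net:backward(x, crit:backward(out, y))       -- fills gradParams
   return loss, gradParams
end

local optimState = {learningRate = 0.01}
for i = 1, 10 do
   local _, fs = optim.sgd(feval, params, optimState)  -- params updated in place; fs[1] is the loss
end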
With this approach, the program from section 1 can be rewritten as:
-- given model, criterion, input, outReal, optimState
local params, gradParams = model:getParameters()

local function feval()
    return criterion.output, gradParams
end

for ...
    model:training()
    model:zeroGradParameters()
    outPredict = model:forward(input)
    err = criterion:forward(outPredict, outReal)
    grad_criterion = criterion:backward(outPredict, outReal)
    model:backward(input, grad_criterion)
    optim.sgd(feval, params, optimState)
end
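A nice side effect of this structure is that switching to another optimization algorithm only means changing the last call, for example (illustrative hyperparameters, not from the original post):
local adamState = {learningRate = 1e-3}
optim.adam(feval, params, adamState)   -- instead of optim.sgd(feval, params, optimState)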
End of the 170301 update.
=========================================================
3. A caveat when using model:backward
Update 170405:
Note that every call to model:backward must be paired with the corresponding model:forward.
The documentation of [gradInput] backward(input, gradOutput) in https://github.com/torch/nn/blob/master/doc/module.md states:
In general this method makes the assumption forward(input) has been called before, with the same input. This is necessary for optimization reasons. If you do not respect this rule, backward() will compute incorrect gradients.
Presumably this is because backward reuses intermediate variables computed during forward; forward must therefore be run right before backward, otherwise the cached intermediates do not match the backward call and the gradients come out wrong.
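A small sketch of the pitfall (my own illustration; the tiny network and tensors are made up): if another forward with a different input runs between the forward and backward for the first input, backward uses the intermediates cached by the later forward:
require 'nn'

local net = nn.Sequential():add(nn.Linear(3, 2)):add(nn.Tanh())
local x1, x2 = torch.randn(3), torch.randn(3)
local gradOut = torch.ones(2)

net:forward(x1)
net:forward(x2)            -- overwrites the intermediates cached for x1

net:zeroGradParameters()
net:backward(x1, gradOut)  -- wrong: gradients are computed from x2's cached activations

net:zeroGradParameters()
net:forward(x1)            -- re-run forward so the cache matches x1 again
net:backward(x1, gradOut)  -- correct gradients for x1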
In my own code, after the initial forwards only the intermediates of the last forward were kept, so the backward results were always wrong (see the commented-out line in method5).
The only (admittedly ugly) workaround was to run forward once more right before each backward, which guarantees correct results at the cost of one extra forward pass; see method5 in the code below.
Notes: method1 is the ordinary full-batch approach, but it needs a lot of GPU memory. To reduce memory you can use something like caffe's iter_size, as in method2 (not exactly the same as caffe's iter_size). If even more samples are needed, and the criterion should see as many samples as possible at once, both of the first two approaches break down; method3 was meant to handle that case, but in practice it has a problem: the loss converges very slowly.

method4 refines and tests method3 further: with the two marked lines commented out it converges normally, but with them uncommented it converges as badly as method3. method5 is the final fix and converges normally. To confirm that forward and backward must be paired, uncomment the commented forward line in method5 and comment out the line above it: convergence becomes very slow again, similar to methods 3 and 4. The losses of the first 10 epochs for each variant are listed after the code.
The full program is as follows:
require 'torch'
require 'nn'
require 'optim'
require 'cunn'
require 'cutorch'
local mnist = require 'mnist'

local fullset = mnist.traindataset()
local testset = mnist.testdataset()

-- data split and network sizes follow http://x-wei.github.io/learn-torch-6-optim.html
local trainset = {
size = 50000,
data = fullset.data[{{1,50000}}]:double(),
label = fullset.label[{{1,50000}}]
}
trainset.data = trainset.data - trainset.data:mean()
trainset.data = trainset.data:cuda()
trainset.label = trainset.label:cuda()

local validationset = {
size = 10000,
data = fullset.data[{{50001,60000}}]:double(),
label = fullset.label[{{50001,60000}}]
}
validationset.data = validationset.data - validationset.data:mean()
validationset.data = validationset.data:cuda()
validationset.label = validationset.label:cuda()

local model = nn.Sequential()
model:add(nn.Reshape(1, 28, 28))
model:add(nn.MulConstant(1/256.0*3.2))
model:add(nn.SpatialConvolutionMM(1, 20, 5, 5, 1, 1, 0, 0))
model:add(nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0))
model:add(nn.SpatialConvolutionMM(20, 50, 5, 5, 1, 1, 0, 0))
model:add(nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0))
model:add(nn.Reshape(4*4*50))
model:add(nn.Linear(4*4*50, 500))
model:add(nn.ReLU())
model:add(nn.Linear(500, 10))
model:add(nn.LogSoftMax())

model = require('weight-init')(model, 'xavier')
model = model:cuda()

x, dl_dx = model:getParameters()

local criterion = nn.ClassNLLCriterion():cuda()

local sgd_params = {
learningRate = 1e-2,
learningRateDecay = 1e-4,
weightDecay = 1e-3,
momentum = 1e-4
}

local training = function(batchSize)
local current_loss = 0
local count = 0
local shuffle = torch.randperm(trainset.size)
batchSize = batchSize or 200

for t = 0, trainset.size - 1, batchSize do
-- setup inputs and targets for batch iteration
local size = math.min(t + batchSize, trainset.size) - t
local inputs = torch.Tensor(size, 28, 28):cuda()
local targets = torch.Tensor(size):cuda()
for i = 1, size do
inputs[i] = trainset.data[shuffle[i+t]]
targets[i] = trainset.label[shuffle[i+t]] + 1
end

local feval = function(x_new)
local miniBatchSize = 20  -- mini-batch size for methods 2-5; must evenly divide batchSize
if x ~= x_new then x:copy(x_new) end

-- reset gradients
dl_dx:zero()

--[[ ------------------ method 1 original batch
local outputs = model:forward(inputs)
local loss = criterion:forward(outputs, targets)
local gradInput = criterion:backward(outputs, targets)
model:backward(inputs, gradInput)
--]]

--[[ ------------------ method 2 iter-size with batch
local loss = 0
for idx = 1, batchSize, miniBatchSize do
local outputs = model:forward(inputs[{{idx, idx + miniBatchSize - 1}}])
loss = loss + criterion:forward(outputs, targets[{{idx, idx + miniBatchSize - 1}}])
local gradInput = criterion:backward(outputs, targets[{{idx, idx + miniBatchSize - 1}}])
model:backward(inputs[{{idx, idx + miniBatchSize - 1}}], gradInput)
end
dl_dx:mul(1.0 * miniBatchSize / batchSize)
loss = loss * miniBatchSize / batchSize
--]]

--[[ ------------------ method 3 mini-batch in batch
local outputs = torch.Tensor(batchSize, 10):zero():cuda()
for idx = 1, batchSize, miniBatchSize do
outputs[{{idx, idx + miniBatchSize - 1}}]:copy(model:forward(inputs[{{idx, idx + miniBatchSize - 1}}]))
end
local loss = 0
for idx = 1, batchSize, miniBatchSize do
loss = loss + criterion:forward(outputs[{{idx, idx + miniBatchSize - 1}}],
targets[{{idx, idx + miniBatchSize - 1}}])
end
local gradInput = torch.Tensor(batchSize, 10):zero():cuda()
for idx = 1, batchSize, miniBatchSize do
gradInput[{{idx, idx + miniBatchSize - 1}}]:copy(criterion:backward(
outputs[{{idx, idx + miniBatchSize - 1}}], targets[{{idx, idx + miniBatchSize - 1}}]))
end
for idx = 1, batchSize, miniBatchSize do
model:backward(inputs[{{idx, idx + miniBatchSize - 1}}], gradInput[{{idx, idx + miniBatchSize - 1}}])
end
dl_dx:mul( 1.0 * miniBatchSize / batchSize)
loss = loss * miniBatchSize / batchSize
--]]

--[[ ------------------ method 4 mini-batch in batch
local outputs = torch.Tensor(batchSize, 10):zero():cuda()
local loss = 0
local gradInput = torch.Tensor(batchSize, 10):zero():cuda()
for idx = 1, batchSize, miniBatchSize do
outputs[{{idx, idx + miniBatchSize - 1}}]:copy(model:forward(inputs[{{idx, idx + miniBatchSize - 1}}]))
loss = loss + criterion:forward(outputs[{{idx, idx + miniBatchSize - 1}}],
targets[{{idx, idx + miniBatchSize - 1}}])
gradInput[{{idx, idx + miniBatchSize - 1}}]:copy(criterion:backward(
outputs[{{idx, idx + miniBatchSize - 1}}], targets[{{idx, idx + miniBatchSize - 1}}]))
-- end
-- for idx = 1, batchSize, miniBatchSize do
model:backward(inputs[{{idx, idx + miniBatchSize - 1}}], gradInput[{{idx, idx + miniBatchSize - 1}}])
end
dl_dx:mul(1.0 * miniBatchSize / batchSize)
loss = loss * miniBatchSize / batchSize
--]]

---[[ ------------------ method 5 mini-batch in batch
local loss = 0
local gradInput = torch.Tensor(batchSize, 10):zero():cuda()
for idx = 1, batchSize, miniBatchSize do
local outputs = model:forward(inputs[{{idx, idx + miniBatchSize - 1}}])
loss = loss + criterion:forward(outputs, targets[{{idx, idx + miniBatchSize - 1}}])
gradInput[{{idx, idx + miniBatchSize - 1}}]:copy(criterion:backward(outputs, targets[{{idx, idx + miniBatchSize - 1}}]))
end
for idx = 1, batchSize, miniBatchSize do
model:forward(inputs[{{idx, idx + miniBatchSize - 1}}])
--model:forward(inputs[{{batchSize - miniBatchSize + 1, batchSize}}])
model:backward(inputs[{{idx, idx + miniBatchSize - 1}}], gradInput[{{idx, idx + miniBatchSize - 1}}])
end
dl_dx:mul(1.0 * miniBatchSize / batchSize)
loss = loss * miniBatchSize / batchSize
--]]

return loss, dl_dx
end

_, fs = optim.sgd(feval, x, sgd_params)

count = count + 1
current_loss = current_loss + fs[1]
end

return current_loss / count -- normalize loss
end

local eval = function(dataset, batchSize)
local count = 0
batchSize = batchSize or 200

for i = 1, dataset.size, batchSize do
local size = math.min(i + batchSize - 1, dataset.size) - i
local inputs = dataset.data[{{i,i+size-1}}]:cuda()
local targets = dataset.label[{{i,i+size-1}}]
local outputs = model:forward(inputs)
local _, indices = torch.max(outputs, 2)
indices:add(-1)
indices = indices:cuda()
local guessed_right = indices:eq(targets):sum()
count = count + guessed_right
end

return count / dataset.size
end

local max_iters = 30
local last_accuracy = 0
local decreasing = 0
local threshold = 1 -- how many decreasing epochs we allow

for i = 1, max_iters do
-- timer = torch.Timer()

model:training()
local loss = training()

model:evaluate()
local accuracy = eval(validationset)

print(string.format('Epoch: %d Current loss: %4f; validation set accu: %4f', i, loss, accuracy))

if accuracy < last_accuracy then
if decreasing > threshold then break end
decreasing = decreasing + 1
else
decreasing = 0
end
last_accuracy = accuracy

--print(' Time elapsed: ' .. i .. 'iter: ' .. timer:time().real .. ' seconds')
end

testset.data = testset.data:double()
eval(testset)
weight-init.lua
--
-- Different weight initialization methods
--
-- > model = require('weight-init')(model, 'heuristic')
--
require("nn")

-- "Efficient backprop"
-- Yann Lecun, 1998
local function w_init_heuristic(fan_in, fan_out)
return math.sqrt(1/(3*fan_in))
end

-- "Understanding the difficulty of training deep feedforward neural networks"
-- Xavier Glorot, 2010
local function w_init_xavier(fan_in, fan_out)
return math.sqrt(2/(fan_in + fan_out))
end

-- "Understanding the difficulty of training deep feedforward neural networks"
-- Xavier Glorot, 2010
local function w_init_xavier_caffe(fan_in, fan_out)
return math.sqrt(1/fan_in)
end

-- "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification"
-- Kaiming He, 2015
local function w_init_kaiming(fan_in, fan_out)
return math.sqrt(4/(fan_in + fan_out))
end

local function w_init(net, arg)
-- choose initialization method
local method = nil
if arg == 'heuristic' then method = w_init_heuristic
elseif arg == 'xavier' then method = w_init_xavier
elseif arg == 'xavier_caffe' then method = w_init_xavier_caffe
elseif arg == 'kaiming' then method = w_init_kaiming
else
assert(false)
end

-- loop over all convolutional modules
for i = 1, #net.modules do
local m = net.modules[i]
if m.__typename == 'nn.SpatialConvolution' then
m:reset(method(m.nInputPlane*m.kH*m.kW, m.nOutputPlane*m.kH*m.kW))
elseif m.__typename == 'nn.SpatialConvolutionMM' then
m:reset(method(m.nInputPlane*m.kH*m.kW, m.nOutputPlane*m.kH*m.kW))
elseif m.__typename == 'cudnn.SpatialConvolution' then
m:reset(method(m.nInputPlane*m.kH*m.kW, m.nOutputPlane*m.kH*m.kW))
elseif m.__typename == 'nn.LateralConvolution' then
m:reset(method(m.nInputPlane*1*1, m.nOutputPlane*1*1))
elseif m.__typename == 'nn.VerticalConvolution' then
m:reset(method(1*m.kH*m.kW, 1*m.kH*m.kW))
elseif m.__typename == 'nn.HorizontalConvolution' then
m:reset(method(1*m.kH*m.kW, 1*m.kH*m.kW))
elseif m.__typename == 'nn.Linear' then
m:reset(method(m.weight:size(2), m.weight:size(1)))
elseif m.__typename == 'nn.TemporalConvolution' then
m:reset(method(m.weight:size(2), m.weight:size(1)))
end

if m.bias then
m.bias:zero()
end
end

return net
end

return w_init
Method 1
Epoch: 1 Current loss: 0.616950; validation set accu: 0.920900
Epoch: 2 Current loss: 0.228665; validation set accu: 0.942400
Epoch: 3 Current loss: 0.168047; validation set accu: 0.957900
Epoch: 4 Current loss: 0.134796; validation set accu: 0.961800
Epoch: 5 Current loss: 0.113071; validation set accu: 0.966200
Epoch: 6 Current loss: 0.098782; validation set accu: 0.968800
Epoch: 7 Current loss: 0.088252; validation set accu: 0.970000
Epoch: 8 Current loss: 0.080225; validation set accu: 0.971200
Epoch: 9 Current loss: 0.073702; validation set accu: 0.972200
Epoch: 10 Current loss: 0.068171; validation set accu: 0.972400

Method 2
Epoch: 1 Current loss: 0.624633; validation set accu: 0.922200
Epoch: 2 Current loss: 0.238459; validation set accu: 0.945200
Epoch: 3 Current loss: 0.174089; validation set accu: 0.959000
Epoch: 4 Current loss: 0.140234; validation set accu: 0.963800
Epoch: 5 Current loss: 0.116498; validation set accu: 0.968000
Epoch: 6 Current loss: 0.101376; validation set accu: 0.968800
Epoch: 7 Current loss: 0.089484; validation set accu: 0.972600
Epoch: 8 Current loss: 0.080812; validation set accu: 0.973000
Epoch: 9 Current loss: 0.073929; validation set accu: 0.975100
Epoch: 10 Current loss: 0.068330; validation set accu: 0.975400

Method 3
Epoch: 1 Current loss: 2.202240; validation set accu: 0.548500
Epoch: 2 Current loss: 2.049710; validation set accu: 0.669300
Epoch: 3 Current loss: 1.993560; validation set accu: 0.728900
Epoch: 4 Current loss: 1.959818; validation set accu: 0.774500
Epoch: 5 Current loss: 1.945992; validation set accu: 0.757600
Epoch: 6 Current loss: 1.930599; validation set accu: 0.809600
Epoch: 7 Current loss: 1.911803; validation set accu: 0.837200
Epoch: 8 Current loss: 1.904754; validation set accu: 0.842100
Epoch: 9 Current loss: 1.903705; validation set accu: 0.846400
Epoch: 10 Current loss: 1.903911; validation set accu: 0.848100

Method 4
Epoch: 1 Current loss: 0.624240; validation set accu: 0.924900
Epoch: 2 Current loss: 0.213469; validation set accu: 0.948500
Epoch: 3 Current loss: 0.156797; validation set accu: 0.959800
Epoch: 4 Current loss: 0.126438; validation set accu: 0.963900
Epoch: 5 Current loss: 0.106664; validation set accu: 0.965900
Epoch: 6 Current loss: 0.094166; validation set accu: 0.967200
Epoch: 7 Current loss: 0.084848; validation set accu: 0.971200
Epoch: 8 Current loss: 0.077244; validation set accu: 0.971800
Epoch: 9 Current loss: 0.071417; validation set accu: 0.973300
Epoch: 10 Current loss: 0.065737; validation set accu: 0.971600

Method 4 with the two commented lines uncommented
Epoch: 1 Current loss: 2.178319; validation set accu: 0.542200
Epoch: 2 Current loss: 2.031493; validation set accu: 0.648700
Epoch: 3 Current loss: 1.982282; validation set accu: 0.703700
Epoch: 4 Current loss: 1.956709; validation set accu: 0.762700
Epoch: 5 Current loss: 1.927590; validation set accu: 0.808100
Epoch: 6 Current loss: 1.924535; validation set accu: 0.817200
Epoch: 7 Current loss: 1.911364; validation set accu: 0.820100
Epoch: 8 Current loss: 1.898206; validation set accu: 0.855400
Epoch: 9 Current loss: 1.885394; validation set accu: 0.836500
Epoch: 10 Current loss: 1.880787; validation set accu: 0.870200

Method 5
Epoch: 1 Current loss: 0.619814; validation set accu: 0.924300
Epoch: 2 Current loss: 0.232870; validation set accu: 0.948800
Epoch: 3 Current loss: 0.172606; validation set accu: 0.954900
Epoch: 4 Current loss: 0.137763; validation set accu: 0.961800
Epoch: 5 Current loss: 0.116268; validation set accu: 0.967700
Epoch: 6 Current loss: 0.101985; validation set accu: 0.968800
Epoch: 7 Current loss: 0.091154; validation set accu: 0.970900
Epoch: 8 Current loss: 0.083219; validation set accu: 0.972700
Epoch: 9 Current loss: 0.074921; validation set accu: 0.972800
Epoch: 10 Current loss: 0.070208; validation set accu: 0.972800

Method 5 with the commented forward line uncommented and the line above it commented out
Epoch: 1 Current loss: 2.161032; validation set accu: 0.497500
Epoch: 2 Current loss: 2.027255; validation set accu: 0.690900
Epoch: 3 Current loss: 1.972939; validation set accu: 0.767600
Epoch: 4 Current loss: 1.940982; validation set accu: 0.766000
Epoch: 5 Current loss: 1.933135; validation set accu: 0.812800
Epoch: 6 Current loss: 1.913039; validation set accu: 0.799300
Epoch: 7 Current loss: 1.896871; validation set accu: 0.848800
Epoch: 8 Current loss: 1.899655; validation set accu: 0.854400
Epoch: 9 Current loss: 1.889465; validation set accu: 0.845700
Epoch: 10 Current loss: 1.878703; validation set accu: 0.846400
End of the 170405 update.
=========================================================