optim.py cs231n
If you spot any mistakes, please point them out; it would be much appreciated.
import numpy as np

"""
This file implements various first-order update rules that are commonly used for
training neural networks. Each update rule accepts current weights and the
gradient of the loss with respect to those weights and produces the next set of
weights. Each update rule has the same interface:

def update(w, dw, config=None):

Inputs:
- w: A numpy array giving the current weights.
- dw: A numpy array of the same shape as w giving the gradient of the
  loss with respect to w.
- config: A dictionary containing hyperparameter values such as learning rate,
  momentum, etc. If the update rule requires caching values over many
  iterations, then config will also hold these cached values.

Returns:
- next_w: The next point after the update.
- config: The config dictionary to be passed to the next iteration of the
  update rule.

NOTE: For most update rules, the default learning rate will probably not perform
well; however the default values of the other hyperparameters should work well
for a variety of different problems.

For efficiency, update rules may perform in-place updates, mutating w and
setting next_w equal to w.
"""
"""
Performs vanilla stochastic gradient descent. config format:
- learning_rate: Scalar learning rate.
"""
if config is None: config = {}
config.setdefault('learning_rate', 1e-2)
w -= config['learning_rate'] * dw
return w, config def sgd_momentum(w, dw, config=None):
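One consequence of the in-place update noted in the module docstring: sgd returns the very same array object it was given. A quick sanity check (mine, not from the original listing):

w = np.ones(3)
next_w, cfg = sgd(w, np.ones(3))
print(next_w is w)                  # True: w was mutated in place
print(next_w)                       # [0.99 0.99 0.99] at the default 1e-2 rate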
"""
Performs stochastic gradient descent with momentum. config format:
- learning_rate: Scalar learning rate.
- momentum: Scalar between 0 and 1 giving the momentum value.
Setting momentum = 0 reduces to sgd.
- velocity: A numpy array of the same shape as w and dw used to store a moving
average of the gradients.
"""
if config is None: config = {}
config.setdefault('learning_rate', 1e-2)
config.setdefault('momentum', 0.9)
v = config.get('velocity', np.zeros_like(w)) next_w = None
v=v*config['momentum']-config['learning_rate']*dw
next_w=w+v
config['velocity'] = v return next_w, config def rmsprop(x, dx, config=None):
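The docstring's claim that momentum = 0 reduces to plain sgd is easy to verify numerically; this check is my own addition:

w = np.linspace(-1.0, 1.0, 5)
dw = np.ones_like(w)
w_sgd, _ = sgd(w.copy(), dw)
w_mom, _ = sgd_momentum(w.copy(), dw, {'momentum': 0.0})
print(np.allclose(w_sgd, w_mom))    # True: identical step when momentum is 0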
"""
Uses the RMSProp update rule, which uses a moving average of squared gradient
values to set adaptive per-parameter learning rates. config format:
- learning_rate: Scalar learning rate.
- decay_rate: Scalar between 0 and 1 giving the decay rate for the squared
gradient cache.
- epsilon: Small scalar used for smoothing to avoid dividing by zero.
- cache: Moving average of second moments of gradients.
"""
if config is None: config = {}
config.setdefault('learning_rate', 1e-2)
config.setdefault('decay_rate', 0.99)
config.setdefault('epsilon', 1e-8)
config.setdefault('cache', np.zeros_like(x)) next_x = None cache=config['cache']*config['decay_rate']+(1-config['decay_rate'])*dx**2
next_x=x-config['learning_rate']*dx/np.sqrt(cache+config['epsilon'])
config['cache']=cache return next_x, config def adam(x, dx, config=None):
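Because the cache is tracked per parameter, RMSProp pushes components with wildly different gradient scales toward comparable step sizes. A small demonstration of mine:

x = np.zeros(2)
dx = np.array([100.0, 0.001])       # gradient magnitudes differ by a factor of 1e5
next_x, cfg = rmsprop(x, dx)
print(next_x)                       # roughly [-0.1, -0.07]: the two steps
                                    # differ by less than a factor of 2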
"""
Uses the Adam update rule, which incorporates moving averages of both the
gradient and its square and a bias correction term. config format:
- learning_rate: Scalar learning rate.
- beta1: Decay rate for moving average of first moment of gradient.
- beta2: Decay rate for moving average of second moment of gradient.
- epsilon: Small scalar used for smoothing to avoid dividing by zero.
- m: Moving average of gradient.
- v: Moving average of squared gradient.
- t: Iteration number.
"""
if config is None: config = {}
config.setdefault('learning_rate', 1e-3)
config.setdefault('beta1', 0.9)
config.setdefault('beta2', 0.999)
config.setdefault('epsilon', 1e-8)
config.setdefault('m', np.zeros_like(x))
config.setdefault('v', np.zeros_like(x))
config.setdefault('t', 0)
config['t']+=1
这个方法比较综合,各种方法的好处吧
m=config['beta1']*config['m']+(1-config['beta1'])*dx # now to change by acc
v=config['beta2']*config['v']+(1-config['beta2'])*dx**2
config['m']=m
config['v']=v
m=m/(1-config['beta1']**config['t'])
v=v/(1-config['beta2']**config['t']) next_x=x-config['learning_rate']*m/np.sqrt(v+config['epsilon']) return next_x, config
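On the very first step (t = 1), the bias correction cancels the (1 - beta) factors exactly, so the corrected m equals dx and the corrected v equals dx ** 2; the step is then about learning_rate * sign(dx) for every parameter. A quick check (my own):

x = np.zeros(3)
dx = np.array([5.0, -0.5, 0.05])
next_x, cfg = adam(x, dx)
print(next_x)                       # ~[-0.001, 0.001, -0.001]: first-step size
                                    # is about the 1e-3 learning rate per parameter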