PyTorch Study Notes: Tensors
PyTorch Tensors are just like numpy arrays, except that they can also run on the GPU. They have no built-in notion of a computational graph, gradients, or deep learning. Here we fit a two-layer network using only PyTorch Tensors.
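Since Tensors mirror numpy arrays, moving data between the two is a one-liner. A minimal sketch (the variable names here are my own, not from the original note):

```python
import torch
import numpy as np

a = np.ones((2, 3), dtype=np.float32)
t = torch.from_numpy(a)  # numpy array -> Tensor (shares the underlying memory)
b = t.numpy()            # Tensor -> numpy array (also shares memory)
```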
```python
import torch

# Pre-0.4 API: the dtype doubles as the tensor type.
dtype = torch.FloatTensor

# Step 1: create random tensors for the data and the weights.
N, D_in, H, D_out = 64, 1000, 100, 10  # batch size, input dim, hidden dim, output dim

x = torch.randn(N, D_in).type(dtype)    # [torch.FloatTensor of size 64x1000]
y = torch.randn(N, D_out).type(dtype)   # [torch.FloatTensor of size 64x10]
w1 = torch.randn(D_in, H).type(dtype)   # [torch.FloatTensor of size 1000x100]
w2 = torch.randn(H, D_out).type(dtype)  # [torch.FloatTensor of size 100x10]
```
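Under this pre-0.4 API, running the same computation on the GPU should only require swapping the dtype (a sketch assuming a CUDA-enabled PyTorch build):

```python
# With this dtype, every .type(dtype) call above allocates the tensor on the GPU.
dtype = torch.cuda.FloatTensor
```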
Step 2: the forward pass. Run a single iteration first, just to inspect the intermediate values:
```python
learning_rate = 1e-6

for t in range(1):
    # Step 2: forward pass: compute predictions and the loss.
    h = x.mm(w1)             # mm = matrix multiplication; [torch.FloatTensor of size 64x100]
    # x.mul(w1) would be element-wise multiplication and raises
    # "RuntimeError: inconsistent tensor size" here, because x and w1 have different shapes.
    h_relu = h.clamp(min=0)  # clamp bounds values to [min, max]; min=0 alone acts as ReLU
    # h_relu: [torch.FloatTensor of size 64x100]
    y_pred = h_relu.mm(w2)   # [torch.FloatTensor of size 64x10]
    loss = (y_pred - y).pow(2).sum()  # pow(2) squares element-wise; e.g. 30832366.024527483

# Plain-Python equivalent of clamp for a single value:
# def clamp(value, minvalue, maxvalue):
#     return max(minvalue, min(value, maxvalue))

''' Sample output: h before and after clamping.
h
 6.2160e+00 -1.0304e+01 -2.1468e+01  ...   1.9651e+01  1.7158e+01  1.3336e+01
 5.8056e+01  2.6900e+01  2.2681e+01  ...  -3.0021e+01 -4.7533e+01  3.7371e+01
-1.6430e+01 -4.1532e+01  2.7384e+01  ...  -3.2225e+01 -1.9597e+01  5.8636e+01
                ...         ⋱          ...
 9.2964e+00  6.5791e+01  1.8076e+01  ...   2.4620e+01  2.3355e+01  4.4987e-01
 3.7563e+01 -2.6666e+01  3.5643e+01  ...   3.0626e+01  3.0002e+01 -1.3277e+01
-4.2287e+01  3.3466e+01  3.8845e+01  ...   2.1715e+01 -3.3691e+01 -2.5290e+01
[torch.FloatTensor of size 64x100]

h_relu
  6.2160   0.0000   0.0000  ...   19.6511  17.1578  13.3358
 58.0565  26.8997  22.6810  ...    0.0000   0.0000  37.3708
  0.0000   0.0000  27.3841  ...    0.0000   0.0000  58.6358
           ...       ⋱        ...
  9.2964  65.7915  18.0760  ...   24.6199  23.3550   0.4499
 37.5627   0.0000  35.6430  ...   30.6257  30.0016   0.0000
  0.0000  33.4656  38.8449  ...   21.7154   0.0000   0.0000
[torch.FloatTensor of size 64x100]
'''
```
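To make the mm / mul / clamp behavior concrete, here is a tiny self-contained sketch (my own example, not from the note): mm is matrix multiplication, mul is element-wise multiplication, and clamp cuts values off at the given bounds:

```python
a = torch.ones(2, 3)
b = torch.ones(3, 4)

print(a.mm(b))   # matrix product: size 2x4, every entry 3.0
print(a.mul(a))  # element-wise product: size 2x3 (shapes must match; a.mul(b) would fail)

c = torch.Tensor([[-2.0, 0.5, 3.0]])
print(c.clamp(min=0))         # [0.0, 0.5, 3.0] -- exactly ReLU
print(c.clamp(min=0, max=1))  # [0.0, 0.5, 1.0]
```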
Step 3: the backward pass. Now train for 500 iterations, computing the gradients by hand:
```python
for t in range(500):
    # Step 2: forward pass: compute predictions and the loss.
    h = x.mm(w1)                          # [torch.FloatTensor of size 64x100]
    h_relu = h.clamp(min=0)               # ReLU; [torch.FloatTensor of size 64x100]
    y_pred = h_relu.mm(w2)                # [torch.FloatTensor of size 64x10]
    loss = (y_pred - y).pow(2).sum()      # e.g. 30832366.024527483 on the first iteration

    # Step 3: backward pass: manually compute the gradients.
    grad_y_pred = 2.0 * (y_pred - y)      # [torch.FloatTensor of size 64x10]
    grad_w2 = h_relu.t().mm(grad_y_pred)  # .t() = transpose; [torch.FloatTensor of size 100x10]
    grad_h_relu = grad_y_pred.mm(w2.t())  # [torch.FloatTensor of size 64x100]
    grad_h = grad_h_relu.clone()          # clone() copies the tensor, like numpy's copy()
    grad_h[h < 0] = 0                     # gradient of ReLU: zero wherever h < 0
    grad_w1 = x.t().mm(grad_h)            # [torch.FloatTensor of size 1000x100]

''' Sample output: h_relu vs h_relu.t() (note the transposed size).
  0.0000  14.8044   0.0000  ...    0.0000  38.3654   0.0000
 21.3853   0.0000  27.1789  ...   14.8747  14.6064   0.0000
 33.8491   0.0000   0.0000  ...   26.2651  11.5845   0.0000
           ...       ⋱        ...
 11.2708   0.0000   0.0000  ...    0.0000   4.2082   0.0000
  0.0000   0.0000   0.0000  ...    2.6930   5.6134  47.2977
  0.0000  37.3445   0.0000  ...   31.3511   0.0000  64.6182
[torch.FloatTensor of size 64x100]

  0.0000  21.3853  33.8491  ...   11.2708   0.0000   0.0000
 14.8044   0.0000   0.0000  ...    0.0000   0.0000  37.3445
  0.0000  27.1789   0.0000  ...    0.0000   0.0000   0.0000
           ...       ⋱        ...
  0.0000  14.8747  26.2651  ...    0.0000   2.6930  31.3511
 38.3654  14.6064  11.5845  ...    4.2082   5.6134   0.0000
  0.0000   0.0000   0.0000  ...    0.0000  47.2977  64.6182
[torch.FloatTensor of size 100x64]
'''

''' Sample output: grad_h before and after the ReLU mask grad_h[h < 0] = 0.
-3.9989e+02 -9.3610e+02 -3.9592e+02  ...  -1.0868e+03  6.9429e+02  3.3026e+02
 9.4933e+02  1.2244e+03  2.4054e+02  ...   9.1655e+02  1.3783e+03  2.2368e+02
 4.1473e+03  3.6368e+03 -3.2277e+02  ...   2.9705e+02  3.9689e+03  1.0691e+03
                ...         ⋱          ...
 1.2205e+03 -4.0321e+02  8.4314e+02  ...   1.0697e+03  1.0149e+02 -4.6613e+02
 6.0660e+02  5.5411e+02  2.0111e+03  ...  -7.9235e+02  7.9334e+02 -9.1837e+01
 1.3468e+03  2.4743e+03 -3.9460e+02  ...   1.1505e+03  1.5951e+03  7.3752e+02
[torch.FloatTensor of size 64x100]

   0.0000     0.0000  -395.9182  ...  -1086.8199     0.0000     0.0000
 949.3327     0.0000   240.5419  ...      0.0000     0.0000   223.6831
4147.3193     0.0000     0.0000  ...    297.0452  3968.9290     0.0000
              ...         ⋱        ...
1220.4922     0.0000   843.1447  ...   1069.6855   101.4936     0.0000
   0.0000   554.1067  2011.1219  ...   -792.3494     0.0000   -91.8371
1346.8444  2474.3076     0.0000  ...      0.0000  1595.0582   737.5197
[torch.FloatTensor of size 64x100]
'''
```
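For reference, the manual backward pass above is just the chain rule applied to the loss. With $L = \sum (y_{\text{pred}} - y)^2$, $h = x w_1$, $h_{\text{relu}} = \max(h, 0)$ and $y_{\text{pred}} = h_{\text{relu}} w_2$:

$$
\begin{aligned}
\frac{\partial L}{\partial y_{\text{pred}}} &= 2\,(y_{\text{pred}} - y), \\
\frac{\partial L}{\partial w_2} &= h_{\text{relu}}^\top \, \frac{\partial L}{\partial y_{\text{pred}}}, \\
\frac{\partial L}{\partial h_{\text{relu}}} &= \frac{\partial L}{\partial y_{\text{pred}}} \, w_2^\top, \\
\frac{\partial L}{\partial h} &= \frac{\partial L}{\partial h_{\text{relu}}} \odot \mathbf{1}[h > 0], \\
\frac{\partial L}{\partial w_1} &= x^\top \, \frac{\partial L}{\partial h}.
\end{aligned}
$$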
Step 4: the gradient descent step on the weights. This code continues the body of the training loop above:
```python
    # Step 4: gradient descent step on the weights.
    w1 -= learning_rate * grad_w1  # [torch.FloatTensor of size 1000x100]
    w2 -= learning_rate * grad_w2  # [torch.FloatTensor of size 100x10]
```
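To watch the fit converge, it is enough to print the loss every so often inside the same loop; a minimal sketch (the exact numbers depend on the random initialization):

```python
    if t % 100 == 0:
        print(t, loss)  # the loss should shrink by several orders of magnitude over 500 steps
```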