- Focus on mastering basic tensor usage and how tensors differ from numpy arrays

- Master tensor dimension operations (concatenation, dimension expansion, squeezing, transposition, repetition, ...)

Basic operations compared with numpy:

1. TensorFlow

Differences between TensorFlow and numpy

Similarity: both provide n-dimensional arrays.
Difference: numpy works with ndarray while TensorFlow works with tensor; numpy has no functions for building tensors with automatic differentiation, and no GPU support.
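To make the difference concrete, here is a minimal sketch (using the TF 1.x graph API, as in the rest of this post): numpy gives you the array, but only TensorFlow can differentiate through operations on it and run them on a GPU.

import numpy as np
import tensorflow as tf

a = np.ones((2, 2))                    # plain numpy ndarray: no graph, no gradients
x = tf.constant(a, dtype=tf.float32)   # tensor that lives in a TensorFlow graph
y = tf.reduce_sum(x * x)
grad = tf.gradients(y, x)              # TensorFlow can compute dy/dx; numpy has no equivalent

with tf.Session() as sess:
    print(sess.run(grad))              # dy/dx = 2*x, i.e. a 2x2 array of 2.0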

- The difference between tf.random_normal and tf.truncated_normal in TensorFlow: both sample from a normal distribution, but tf.truncated_normal re-draws any value that falls more than two standard deviations from the mean, so its output never contains extreme outliers.

Code:

import tensorflow as tf

# shape [2, 2] recovered from the printed output; the seed values below are illustrative
a = tf.Variable(tf.random_normal([2, 2], seed=1))
b = tf.Variable(tf.truncated_normal([2, 2], seed=2))
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(a))
    print(sess.run(b))

Output:
[[-0.81131822  1.48459876]
 [ 0.06532937 -2.44270396]]
[[-0.85811085 -0.19662298]
 [ 0.13895047 -1.22127688]]

Concatenation: tf.concat() in TensorFlow is equivalent to torch.cat() in PyTorch.

Transpose: in PyTorch, a 2-D Tensor is transposed with transpose(dim0, dim1) or simply t();

a multi-dimensional Tensor has its dimensions reordered with permute(dim0, dim1, ..., dimn).

Adding a dimension: in TensorFlow, use tf.expand_dims(input, dim, name=None) to insert a new dimension of size 1.

In PyTorch, use torch.unsqueeze(input, dim) (or Tensor.unsqueeze(dim)) to insert a dimension of size 1 at position dim, as shown in the sketch below.
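A minimal sketch of these dimension operations in PyTorch (the shapes in the comments are what each call returns):

import torch

x = torch.rand(2, 3)
y = torch.rand(2, 3)

cat0 = torch.cat([x, y], dim=0)   # (4, 3)   -- same idea as tf.concat
t2d  = x.t()                      # (3, 2)   -- 2-D transpose
t2d2 = x.transpose(0, 1)          # (3, 2)   -- same thing with explicit dims

z = torch.rand(2, 3, 4)
perm = z.permute(2, 0, 1)         # (4, 2, 3) -- arbitrary reordering of dimensions

u = x.unsqueeze(0)                # (1, 2, 3) -- like tf.expand_dims(x, 0)
print(cat0.shape, t2d.shape, t2d2.shape, perm.shape, u.shape)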

2. PyTorch

import torch
import numpy as np

np_data = np.arange(6).reshape((2, 3))   # a 2x3 numpy array
torch_data = torch.from_numpy(np_data)   # numpy -> pytorch
print(
    '\n numpy', np_data,
    '\n torch', torch_data,
)
'''
numpy [[0 1 2]
       [3 4 5]]
torch
 0  1  2
 3  4  5
[torch.LongTensor of size 2x3]
'''
# torch -> numpy
tensor2array = torch_data.numpy()
print(tensor2array)
"""
[[0 1 2]
 [3 4 5]]
"""
# operators
# abs, add, etc. behave much like their numpy counterparts
data = [[1, 2], [3, 4]]           # example values
tensor = torch.FloatTensor(data)  # convert to a 32-bit float Tensor; torch ops expect Tensors
print(
    '\n numpy', np.matmul(data, data),
    '\n torch', torch.mm(tensor, tensor)  # torch.dot() is the 1-D dot product
)
'''
numpy [[ 7 10]
       [15 22]]
torch
  7  10
 15  22
[torch.FloatTensor of size 2x2]
'''

- PyTorch 0.3 style: x = Variable(torch.rand(5, 3, 224, 224), requires_grad=True).cuda()
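Since PyTorch 0.4 the Variable wrapper has been merged into Tensor, so the same thing can be written directly (a sketch that assumes a CUDA device is available):

import torch

# requires_grad is now set on the Tensor itself, and the tensor can be
# created on the GPU in a single call
x = torch.rand(5, 3, 224, 224, device='cuda', requires_grad=True)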

# -*- coding: utf-8 -*-
"""
Neural Networks
https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py
===============

Neural networks can be constructed using the ``torch.nn`` package.

Now that you had a glimpse of ``autograd``, ``nn`` depends on
``autograd`` to define models and differentiate them.
An ``nn.Module`` contains layers, and a method ``forward(input)`` that
returns the ``output``.

For example, look at this network that classifies digit images:

.. figure:: /_static/img/mnist.png
   :alt: convnet

   convnet

It is a simple feed-forward network. It takes the input, feeds it
through several layers one after the other, and then finally gives the
output.

A typical training procedure for a neural network is as follows:

- Define the neural network that has some learnable parameters (or weights)
- Iterate over a dataset of inputs
- Process input through the network
- Compute the loss (how far is the output from being correct)
- Propagate gradients back into the network's parameters
- Update the weights of the network, typically using a simple update rule:
  ``weight = weight - learning_rate * gradient``

Define the network
------------------

Let's define this network:
"""
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


net = Net()
print(net)

########################################################################
# You just have to define the ``forward`` function, and the ``backward``
# function (where gradients are computed) is automatically defined for you
# using ``autograd``.
# You can use any of the Tensor operations in the ``forward`` function.
#
# The learnable parameters of a model are returned by ``net.parameters()``

params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight

########################################################################
# Let's try a random 32x32 input.
# Note: expected input size of this net (LeNet) is 32x32. To use this net on
# the MNIST dataset, please resize the images from the dataset to 32x32.

input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)

########################################################################
# Zero the gradient buffers of all parameters and backprops with random
# gradients:
net.zero_grad()
out.backward(torch.randn(1, 10))

########################################################################
# .. note::
#
# ``torch.nn`` only supports mini-batches. The entire ``torch.nn``
# package only supports inputs that are a mini-batch of samples, and not
# a single sample.
#
# For example, ``nn.Conv2d`` will take in a 4D Tensor of
# ``nSamples x nChannels x Height x Width``.
#
#     If you have a single sample, just use ``input.unsqueeze(0)`` to add
# a fake batch dimension.
#
# Before proceeding further, let's recap all the classes you’ve seen so far.
#
# **Recap:**
# - ``torch.Tensor`` - A *multi-dimensional array* with support for autograd
# operations like ``backward()``. Also *holds the gradient* w.r.t. the
# tensor.
# - ``nn.Module`` - Neural network module. *Convenient way of
# encapsulating parameters*, with helpers for moving them to GPU,
# exporting, loading, etc.
# - ``nn.Parameter`` - A kind of Tensor, that is *automatically
# registered as a parameter when assigned as an attribute to a*
# ``Module``.
# - ``autograd.Function`` - Implements *forward and backward definitions
# of an autograd operation*. Every ``Tensor`` operation creates at
# least a single ``Function`` node that connects to functions that
# created a ``Tensor`` and *encodes its history*.
#
# **At this point, we covered:**
# - Defining a neural network
# - Processing inputs and calling backward
#
# **Still Left:**
# - Computing the loss
# - Updating the weights of the network
#
# Loss Function
# -------------
# A loss function takes the (output, target) pair of inputs, and computes a
# value that estimates how far away the output is from the target.
#
# There are several different
# `loss functions <https://pytorch.org/docs/nn.html#loss-functions>`_ under the
# nn package .
# A simple loss is: ``nn.MSELoss`` which computes the mean-squared error
# between the input and the target.
#
# For example:

output = net(input)
target = torch.randn(10)      # a dummy target, for example
target = target.view(1, -1)   # make it the same shape as output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)

########################################################################
# Now, if you follow ``loss`` in the backward direction, using its
# ``.grad_fn`` attribute, you will see a graph of computations that looks
# like this:
#
# ::
#
# input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
# -> view -> linear -> relu -> linear -> relu -> linear
# -> MSELoss
# -> loss
#
# So, when we call ``loss.backward()``, the whole graph is differentiated
# w.r.t. the loss, and all Tensors in the graph that have ``requires_grad=True``
# will have their ``.grad`` Tensor accumulated with the gradient.
#
# For illustration, let us follow a few steps backward:
print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU

########################################################################
# Backprop
# --------
# To backpropagate the error all we have to do is to ``loss.backward()``.
# You need to clear the existing gradients though, else gradients will be
# accumulated to existing gradients.
#
#
# Now we shall call ``loss.backward()``, and have a look at conv1's bias
# gradients before and after the backward.

net.zero_grad()  # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)

########################################################################
# Now, we have seen how to use loss functions.
#
# **Read Later:**
#
# The neural network package contains various modules and loss functions
# that form the building blocks of deep neural networks. A full list with
# documentation is `here <https://pytorch.org/docs/nn>`_.
#
# **The only thing left to learn is:**
#
# - Updating the weights of the network
#
# Update the weights
# ------------------
# The simplest update rule used in practice is the Stochastic Gradient
# Descent (SGD):
#
# ``weight = weight - learning_rate * gradient``
#
# We can implement this using simple python code:
#
# .. code:: python
#
# learning_rate = 0.01
# for f in net.parameters():
# f.data.sub_(f.grad.data * learning_rate)
#
# However, as you use neural networks, you want to use various different
# update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc.
# To enable this, we built a small package: ``torch.optim`` that
# implements all these methods. Using it is very simple:

import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update

###############################################################
# .. Note::
#
# Observe how gradient buffers had to be manually set to zero using
# ``optimizer.zero_grad()``. This is because gradients are accumulated
# as explained in `Backprop`_ section.
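Putting the pieces of this tutorial together, one training step looks like the sketch below; the random input/target batches are stand-ins for a real dataset, and the loop reuses the net, criterion and optimizer defined above.

# a minimal training-loop sketch; random data stands in for a real dataset
for step in range(5):
    input = torch.randn(4, 1, 32, 32)   # a mini-batch of 4 fake 32x32 single-channel images
    target = torch.randn(4, 10)         # fake targets with the same shape as the output
    optimizer.zero_grad()               # clear gradients accumulated from the previous step
    output = net(input)                 # forward pass
    loss = criterion(output, target)    # MSE between output and target
    loss.backward()                     # backpropagate
    optimizer.step()                    # apply the SGD update
    print(step, loss.item())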

We constantly manipulate the dimensions of our data: sometimes expanding it (cat, expand), sometimes compressing or reshaping it (squeeze, view), so the methods that manipulate tensor dimensions are especially important. I have collected them here for my own reference and for friends; a short sketch follows the list below.

The methods involved are:

torch.cat() torch.Tensor.expand()

torch.squeeze() torch.Tensor.repeat()

torch.Tensor.narrow() torch.Tensor.view()

torch.Tensor.resize_() torch.Tensor.permute()
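A minimal sketch exercising the methods listed above (the shapes in the comments are what PyTorch returns):

import torch

x = torch.zeros(2, 3)

print(torch.cat([x, x], dim=0).shape)        # torch.Size([4, 3])
print(x.unsqueeze(0).expand(5, 2, 3).shape)  # torch.Size([5, 2, 3])  expand only grows size-1 dims
print(x.unsqueeze(0).squeeze().shape)        # torch.Size([2, 3])     squeeze drops all size-1 dims
print(x.repeat(2, 2).shape)                  # torch.Size([4, 6])     tiles the tensor
print(x.narrow(1, 0, 2).shape)               # torch.Size([2, 2])     slice dim 1 from index 0, length 2
print(x.view(3, 2).shape)                    # torch.Size([3, 2])     reshape without copying
print(x.permute(1, 0).shape)                 # torch.Size([3, 2])     reorder dimensions

y = torch.zeros(2, 3)
y.resize_(6)                                 # in-place resize; may reallocate storage
print(y.shape)                               # torch.Size([6])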

Links to the three previous summaries in this series:

A summary of 4 ways to generate random Tensors

Common Tensor math operations

Tensor comparisons

3. MXNet

- MXNet for PyTorch users in 10 minutes: https://beta.mxnet.io/guide/to-mxnet/pytorch.html

- MXNet offers two interfaces: the original (symbolic/module) API and the Gluon API (a short Gluon sketch follows this list).

- MNIST implementation with the original API: https://mxnet.incubator.apache.org/versions/master/tutorials/python/mnist.html
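For comparison with the PyTorch code earlier in this post, here is a minimal Gluon sketch; the layer sizes and fake data are illustrative and not taken from the MNIST tutorial linked above.

from mxnet import nd, autograd, gluon
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(128, activation='relu'),
        nn.Dense(10))
net.initialize()                        # shapes are inferred on the first forward pass

x = nd.random.uniform(shape=(4, 784))   # a fake mini-batch of flattened 28x28 images
labels = nd.zeros(4)                    # fake class labels
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

with autograd.record():                 # record the forward pass for autograd
    loss = loss_fn(net(x), labels)
loss.backward()
trainer.step(batch_size=4)              # apply one SGD update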

Conversion between numpy and tensor

TensorFlow

numpy -> tensor:

a = np.zeros((3, 3))          # example shape
ta = tf.convert_to_tensor(a)
with tf.Session() as sess:
    print(sess.run(ta))

tensor -> numpy:
import tensorflow as tf

# example values: two 1x4x4x1 "images"
img1 = tf.constant(value=[[[[1], [2], [3], [4]], [[1], [2], [3], [4]], [[1], [2], [3], [4]], [[1], [2], [3], [4]]]], dtype=tf.float32)
img2 = tf.constant(value=[[[[1], [1], [1], [1]], [[1], [1], [1], [1]], [[1], [1], [1], [1]], [[1], [1], [1], [1]]]], dtype=tf.float32)
img = tf.concat(values=[img1, img2], axis=3)   # concatenate along the channel axis
sess = tf.Session()
# sess.run(tf.initialize_all_variables())      # deprecated form
sess.run(tf.global_variables_initializer())
print("out1=", type(img))
# convert to a numpy array
# the .eval() method turns a tensor into numpy data
img_numpy = img.eval(session=sess)
print("out2=", type(img_numpy))
# convert back to a tensor
img_tensor = tf.convert_to_tensor(img_numpy)
print("out3=", type(img_tensor))

MXNet
from mxnet import nd

x = nd.ones((2, 3))      # example shape
a = x.asnumpy()          # NDArray -> numpy
print(type(a), a)
nd.array(a)              # numpy -> NDArray

PyTorch
import torch
import numpy as np

np_data = np.arange(6).reshape((2, 3))
torch_data = torch.from_numpy(np_data)   # numpy -> Tensor
tensor2array = torch_data.numpy()        # Tensor -> numpy
print(
    '\nnumpy array:', np_data,           # [[0 1 2], [3 4 5]]
    '\ntorch tensor:', torch_data,       # [torch.LongTensor of size 2x3]
    '\ntensor to array:', tensor2array,  # [[0 1 2], [3 4 5]]
)
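One detail worth noting for the PyTorch conversion above: torch.from_numpy() and Tensor.numpy() share the underlying memory rather than copying it, so modifying one side is visible on the other.

import numpy as np
import torch

a = np.arange(6).reshape(2, 3)
t = torch.from_numpy(a)   # shares memory with a
a[0, 0] = 100
print(t[0, 0].item())     # 100 -- the tensor sees the change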
