Element-wise operations
An element-wise operation acts on corresponding elements between tensors; two elements correspond when they occupy the same position within their respective tensors. Because of this, two tensors must have the same shape in order to perform an element-wise operation on them.
Suppose we have the following two tensors (both are rank-2 tensors with a shape of 2 × 2):
import torch

t1 = torch.tensor([
[1, 2],
[3, 4]
], dtype=torch.float32)
t2 = torch.tensor([
[9, 8],
[7, 6]
], dtype=torch.float32)
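Before operating on them, we can confirm that both tensors have the same shape:
> print(t1.shape)
torch.Size([2, 2])
> print(t2.shape)
torch.Size([2, 2])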
The elements of the first axis are arrays and the elements of the second axis are numbers.
# Example of the first axis
> print(t1[0])
tensor([1., 2.])
# Example of the second axis
> print(t1[0][0])
tensor(1.)
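Chained indexing like t1[0][0] works, but the comma-separated form is equivalent and more idiomatic:
> print(t1[0, 0])
tensor(1.)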
Addition is an element-wise operation.
> t1 + t2
tensor([[10., 10.],
[10., 10.]])
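Equivalently, we can use the add() method, which also accepts another tensor as its argument:
> t1.add(t2)
tensor([[10., 10.],
        [10., 10.]])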
In fact, all the arithmetic operations (add, subtract, multiply, and divide) are element-wise operations. There are two ways we can apply them with a scalar; in the examples that follow, assume t is the t1 tensor defined above:
- Using these symbolic operations:
> t + 2
tensor([[3., 4.],
[5., 6.]])
> t - 2
tensor([[-1., 0.],
[1., 2.]])
> t * 2
tensor([[2., 4.],
[6., 8.]])
> t / 2
tensor([[0.5000, 1.0000],
[1.5000, 2.0000]])
- Or equivalently, these built-in tensor methods:
> t.add(2)
tensor([[3., 4.],
[5., 6.]])
> t.sub(2)
tensor([[-1., 0.],
[1., 2.]])
> t.mul(2)
tensor([[2., 4.],
[6., 8.]])
> t.div(2)
tensor([[0.5000, 1.0000],
[1.5000, 2.0000]])
Broadcasting tensors
Broadcasting is the mechanism that allows element-wise operations between tensors of different shapes; it is what lets us add a scalar to a higher dimensional tensor.
We can see what the broadcasted scalar value looks like using NumPy's broadcast_to() function:
> import numpy as np
> np.broadcast_to(2, t.shape)
array([[2, 2],
[2, 2]])
This means the scalar value is transformed into a rank-2 tensor just like t, and just like that, the shapes match and the element-wise rule of having the same shape is back in play.
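PyTorch 1.8 and later also expose this directly as torch.broadcast_to(), which expects a tensor argument:
> torch.broadcast_to(torch.tensor(2), t.shape)
tensor([[2, 2],
        [2, 2]])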
A trickier example of broadcasting
t1 = torch.tensor([
[1, 1],
[1, 1]
], dtype=torch.float32)
t2 = torch.tensor([2, 4], dtype=torch.float32)
Even though these two tensors have different shapes, the element-wise operation is still possible; broadcasting is what makes it work. The lower rank tensor t2 is transformed via broadcasting to match the shape of the higher rank tensor t1, just as with the scalar earlier:
> np.broadcast_to(t2.numpy(), t1.shape)
array([[2., 4.],
[2., 4.]], dtype=float32)
> t1 + t2
tensor([[3., 5.],
[3., 5.]])
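The general broadcasting rule, shared by NumPy and PyTorch, is that shapes are compared from the trailing dimension backwards, and two dimensions are compatible when they are equal or when one of them is 1. A quick sketch with made-up shapes:
> a = torch.ones(3, 1)
> b = torch.ones(1, 4)
> (a + b).shape
torch.Size([3, 4])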
When do we actually use broadcasting? We often use it when preprocessing data, and especially inside normalization routines, as shown in the sketch below.
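Here is a minimal sketch of per-column normalization; the data values are made up for illustration. Subtracting the per-column mean and dividing by the per-column standard deviation both broadcast a shape (2,) tensor across a shape (3, 2) tensor:
> data = torch.tensor([[1., 2.], [3., 4.], [5., 6.]])
> mean = data.mean(dim=0)   # shape (2,): one mean per column
> std = data.std(dim=0)     # shape (2,): one std per column
> (data - mean) / std
tensor([[-1., -1.],
        [ 0.,  0.],
        [ 1.,  1.]])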
Comparison operations
Comparison operations are element-wise. For a given comparison operation between two tensors, a new tensor of the same shape is returned, with each element containing either a 0 (false) or a 1 (true). (Note: in PyTorch 1.2 and later, comparison operations return torch.bool tensors; the outputs below, produced by an older version, show dtype=torch.uint8.)
> t = torch.tensor([
[0, 5, 0],
[6, 0, 7],
[0, 8, 0]
], dtype=torch.float32)
Let's check out some of the comparison operations.
> t.eq(0)
tensor([[1, 0, 1],
[0, 1, 0],
[1, 0, 1]], dtype=torch.uint8)
> t.ge(0)
tensor([[1, 1, 1],
[1, 1, 1],
[1, 1, 1]], dtype=torch.uint8)
> t.gt(0)
tensor([[0, 1, 0],
[1, 0, 1],
[0, 1, 0]], dtype=torch.uint8)
> t.lt(0)
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0]], dtype=torch.uint8)
> t.le(7)
tensor([[1, 1, 1],
[1, 1, 1],
[1, 0, 1]], dtype=torch.uint8)
Element-wise operations using functions
Here are some examples:
> t.abs()
tensor([[0., 5., 0.],
[6., 0., 7.],
[0., 8., 0.]])
> t.sqrt()
tensor([[0.0000, 2.2361, 0.0000],
[2.4495, 0.0000, 2.6458],
[0.0000, 2.8284, 0.0000]])
> t.neg()
tensor([[-0., -5., -0.],
[-6., -0., -7.],
[-0., -8., -0.]])
> t.neg().abs()
tensor([[0., 5., 0.],
[6., 0., 7.],
[0., 8., 0.]])
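Each of these tensor methods also has an equivalent function in the torch namespace, so for example torch.sqrt(t) produces the same result as t.sqrt():
> torch.sqrt(t)
tensor([[0.0000, 2.2361, 0.0000],
        [2.4495, 0.0000, 2.6458],
        [0.0000, 2.8284, 0.0000]])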