Element-wise operations

An element-wise operation operates on corresponding elements between tensors.

Two tensors must have the same shape in order to perform element-wise operations on them.

Suppose we have the following two tensors (both are rank-2 tensors with a shape of 2 × 2):

t1 = torch.tensor([
    [1, 2],
    [3, 4]
], dtype=torch.float32)

t2 = torch.tensor([
    [9, 8],
    [7, 6]
], dtype=torch.float32)

The elements of the first axis are arrays and the elements of the second axis are numbers.

# Example of the first axis
> print(t1[0])
tensor([1., 2.])

# Example of the second axis
> print(t1[0][0])
tensor(1.)

Addition is an element-wise operation.

> t1 + t2
tensor([[10., 10.],
        [10., 10.]])

In fact, all of the basic arithmetic operations (add, subtract, multiply, and divide) are element-wise operations. Using t1 from above, there are two ways to perform them:

  1. Using the symbolic operators:

> t1 + 2
tensor([[3., 4.],
        [5., 6.]])

> t1 - 2
tensor([[-1., 0.],
        [1., 2.]])

> t1 * 2
tensor([[2., 4.],
        [6., 8.]])

> t1 / 2
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])
  2. Or equivalently, using the built-in tensor methods:

> t1.add(2)
tensor([[3., 4.],
        [5., 6.]])

> t1.sub(2)
tensor([[-1., 0.],
        [1., 2.]])

> t1.mul(2)
tensor([[2., 4.],
        [6., 8.]])

> t1.div(2)
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])
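Both the operators and the methods also work between two tensors of the same shape, not just between a tensor and a scalar. For example, with t1 and t2 defined at the top of this section:

> t1.sub(t2)
tensor([[-8., -6.],
        [-4., -2.]])

> t1 * t2
tensor([[ 9., 16.],
        [21., 24.]])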

Broadcasting tensors

Broadcasting is the mechanism that allows element-wise operations between tensors of different shapes; in the simplest case, it lets us add a scalar to a higher-dimensional tensor.

We can see what the broadcasted scalar value looks like using NumPy's broadcast_to() function (assuming import numpy as np):

> np.broadcast_to(2, t1.shape)
array([[2, 2],
       [2, 2]])

This means the scalar value is transformed into a rank-2 tensor with the same shape as t1; with that, the shapes match and the element-wise rule of having the same shape is back in play.
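Recent PyTorch versions also expose this operation directly as torch.broadcast_to(), so the same check can be done without leaving PyTorch (a minimal sketch; torch.broadcast_to requires a reasonably recent PyTorch release):

> torch.broadcast_to(torch.tensor(2.), t1.shape)
tensor([[2., 2.],
        [2., 2.]])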

Trickier example of broadcasting

t1 = torch.tensor([
    [1, 1],
    [1, 1]
], dtype=torch.float32)

t2 = torch.tensor([2, 4], dtype=torch.float32)

Even though these two tensors have differing shapes, the element-wise operation is possible, and broadcasting is what makes it work: t2's shape (2,) is broadcast to t1's shape (2, 2), effectively repeating the row [2., 4.] for each row of t1.

> np.broadcast_to(t2.numpy(), t1.shape)
array([[2., 4.],
       [2., 4.]], dtype=float32)

> t1 + t2
tensor([[3., 5.],
        [3., 5.]])

When do we actually use broadcasting? We often need it when preprocessing data, especially inside normalization routines.
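For instance, a common normalization step subtracts a per-feature mean and divides by a per-feature standard deviation, relying on broadcasting across the batch dimension. The following is a minimal sketch with made-up data:

data = torch.tensor([
    [1., 10.],
    [2., 20.],
    [3., 30.],
    [4., 40.]
])                          # shape (4, 2): 4 samples, 2 features

mean = data.mean(dim=0)     # shape (2,): per-feature mean
std = data.std(dim=0)       # shape (2,): per-feature standard deviation

# (4, 2) op (2,) broadcasts the statistics across all 4 samples,
# just like t1 + t2 above
normalized = (data - mean) / std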


Comparison operations

Comparison operations are also element-wise. A comparison between tensors returns a new tensor of the same shape, with each element containing either a 0 or a 1 (0 where the comparison is false, 1 where it is true). Note that the outputs below show the uint8 tensors produced by older PyTorch versions; recent versions return torch.bool tensors containing True and False instead.

> t = torch.tensor([
    [0, 5, 0],
    [6, 0, 7],
    [0, 8, 0]
], dtype=torch.float32)

Let's check out some of the comparison operations.

> t.eq(0)
tensor([[1, 0, 1],
        [0, 1, 0],
        [1, 0, 1]], dtype=torch.uint8)

> t.ge(0)
tensor([[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]], dtype=torch.uint8)

> t.gt(0)
tensor([[0, 1, 0],
        [1, 0, 1],
        [0, 1, 0]], dtype=torch.uint8)

> t.lt(0)
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]], dtype=torch.uint8)

> t.le(7)
tensor([[1, 1, 1],
        [1, 1, 1],
        [1, 0, 1]], dtype=torch.uint8)
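These 0/1 (or boolean) tensors are useful as masks. A small sketch using the t defined just above: the first call counts the elements greater than zero, and the second uses the mask to select them.

> t.gt(0).sum()
tensor(4)

> t[t.gt(0)]
tensor([5., 6., 7., 8.])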

Element-wise operations using functions

Here are some examples:

> t.abs()
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])

> t.sqrt()
tensor([[0.0000, 2.2361, 0.0000],
        [2.4495, 0.0000, 2.6458],
        [0.0000, 2.8284, 0.0000]])

> t.neg()
tensor([[-0., -5., -0.],
        [-6., -0., -7.],
        [-0., -8., -0.]])

> t.neg().abs()
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])
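Each of these methods returns a new tensor and leaves t unchanged, which is why they can be chained as in t.neg().abs(). PyTorch also provides in-place variants named with a trailing underscore; a minimal sketch, shown here on a clone so the original t is preserved:

> u = t.clone()  # copy t so the original stays untouched
> u.neg_()       # in-place negation; modifies and returns u
tensor([[-0., -5., -0.],
        [-6., -0., -7.],
        [-0., -8., -0.]])
> u.abs_()       # in-place absolute value
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])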
