Element-wise operations

An element-wise operation operates on corresponding elements between tensors.

Two tensors must have the same shape in order to perform element-wise operations on them.
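To see the shape requirement in action, here is a minimal sketch (tensor names `a` and `b` are made up for illustration) showing that an element-wise operation between mismatched, non-broadcastable shapes fails:

```python
import torch

a = torch.ones(2, 2)
b = torch.ones(2, 3)

# An element-wise op between shapes (2, 2) and (2, 3) cannot proceed,
# since there is no correspondence between elements.
try:
    a + b
    print("added")
except RuntimeError:
    print("shapes (2, 2) and (2, 3) are not compatible")
```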

Suppose we have the following two tensors, both rank-2 tensors with a shape of 2 \(\times\) 2:

t1 = torch.tensor([
    [1, 2],
    [3, 4]
], dtype=torch.float32)

t2 = torch.tensor([
    [9, 8],
    [7, 6]
], dtype=torch.float32)

The elements of the first axis are arrays and the elements of the second axis are numbers.

# Example of the first axis
> print(t1[0])
tensor([1., 2.])

# Example of the second axis
> print(t1[0][0])
tensor(1.)

Addition is an element-wise operation.

> t1 + t2
tensor([[10., 10.],
        [10., 10.]])
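A quick sketch makes the "corresponding elements" idea concrete: each output element of the sum is computed from the inputs at the same index.

```python
import torch

t1 = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
t2 = torch.tensor([[9, 8], [7, 6]], dtype=torch.float32)

s = t1 + t2

# Every output element is the sum of the two inputs at that same position.
for i in range(2):
    for j in range(2):
        assert s[i][j] == t1[i][j] + t2[i][j]
print(s)
```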

In fact, all of the arithmetic operations (add, subtract, multiply, and divide) are element-wise operations. Using t to refer to t1 from above, there are two ways we can perform them:

  1. Using these symbolic operations:

> t + 2
tensor([[3., 4.],
        [5., 6.]])

> t - 2
tensor([[-1., 0.],
        [1., 2.]])

> t * 2
tensor([[2., 4.],
        [6., 8.]])

> t / 2
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])
  2. Or equivalently, these built-in tensor methods:

> t.add(2)
tensor([[3., 4.],
        [5., 6.]])

> t.sub(2)
tensor([[-1., 0.],
        [1., 2.]])

> t.mul(2)
tensor([[2., 4.],
        [6., 8.]])

> t.div(2)
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])
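The two forms are interchangeable, and a short sketch can verify it directly:

```python
import torch

t = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)

# The operator form and the method form produce identical results.
assert torch.equal(t + 2, t.add(2))
assert torch.equal(t - 2, t.sub(2))
assert torch.equal(t * 2, t.mul(2))
assert torch.equal(t / 2, t.div(2))
print("operator and method forms agree")
```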

Broadcasting tensors

Broadcasting is the mechanism that allows us to perform element-wise operations between tensors of different shapes, for example adding a scalar to a higher-dimensional tensor.

We can see what the broadcasted scalar value looks like using NumPy's broadcast_to() function:

> np.broadcast_to(2, t.shape)
array([[2, 2],
       [2, 2]])

This means the scalar value is transformed into a rank-2 tensor just like t, and just like that, the shapes match and the element-wise rule of having the same shape is back in play.
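We can check this intuition in a small sketch: broadcasting the scalar by hand and then adding gives the same result as letting PyTorch broadcast for us.

```python
import numpy as np
import torch

t = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)

# Manually broadcast the scalar 2 to t's shape (copy() is needed because
# broadcast_to returns a read-only view), then add element-wise.
two = torch.from_numpy(np.broadcast_to(2, t.shape).copy()).float()

assert torch.equal(t + 2, t + two)
print(t + two)
```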

Trickier example of broadcasting

t1 = torch.tensor([
    [1, 1],
    [1, 1]
], dtype=torch.float32)

t2 = torch.tensor([2, 4], dtype=torch.float32)

Even though these two tensors have different shapes, the element-wise operation is still possible, and broadcasting is what makes it possible.

> np.broadcast_to(t2.numpy(), t1.shape)
array([[2., 4.],
       [2., 4.]], dtype=float32)

> t1 + t2
tensor([[3., 5.],
        [3., 5.]])
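As with the scalar case, we can confirm that PyTorch's result matches the manually broadcasted one in a short sketch:

```python
import numpy as np
import torch

t1 = torch.tensor([[1, 1], [1, 1]], dtype=torch.float32)
t2 = torch.tensor([2, 4], dtype=torch.float32)

# t2's shape (2,) is stretched row-wise to (2, 2) before the add.
b = torch.from_numpy(np.broadcast_to(t2.numpy(), t1.shape).copy())

assert torch.equal(t1 + t2, t1 + b)
print(t1 + t2)
```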

When do we actually use broadcasting? We often need broadcasting when preprocessing our data, especially during normalization routines.
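As a quick illustration with a hypothetical dataset (the `data` tensor below is made up), per-feature normalization relies on broadcasting a (2,)-shaped mean and standard deviation across a (3, 2) batch:

```python
import torch

# Hypothetical dataset: 3 samples, 2 features.
data = torch.tensor([[1., 10.],
                     [2., 20.],
                     [3., 30.]])

mean = data.mean(dim=0)  # shape (2,): one mean per feature
std = data.std(dim=0)    # shape (2,): one std per feature

# mean and std broadcast from (2,) to (3, 2) across the rows.
normalized = (data - mean) / std
print(normalized)
```

After normalization, each feature column has mean zero, which is exactly what the broadcasted subtraction and division accomplish in one line.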


Comparison operations

Comparison operations are element-wise. For a given comparison operation between two tensors, a new tensor of the same shape is returned with each element containing either a 0 or a 1.

> t = torch.tensor([
    [0, 5, 0],
    [6, 0, 7],
    [0, 8, 0]
], dtype=torch.float32)

Let's check out some of the comparison operations.

> t.eq(0)
tensor([[1, 0, 1],
        [0, 1, 0],
        [1, 0, 1]], dtype=torch.uint8)

> t.ge(0)
tensor([[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]], dtype=torch.uint8)

> t.gt(0)
tensor([[0, 1, 0],
        [1, 0, 1],
        [0, 1, 0]], dtype=torch.uint8)

> t.lt(0)
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]], dtype=torch.uint8)

> t.le(7)
tensor([[1, 1, 1],
        [1, 1, 1],
        [1, 0, 1]], dtype=torch.uint8)
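Because comparison results are tensors of 0s and 1s, they compose naturally with other element-wise operations. A common sketch is counting matching elements by summing the comparison result:

```python
import torch

t = torch.tensor([[0, 5, 0],
                  [6, 0, 7],
                  [0, 8, 0]], dtype=torch.float32)

# Summing a 0/1 comparison tensor counts the elements that matched.
num_nonzero = t.gt(0).sum().item()
print(num_nonzero)  # 4
```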

Element-wise operations using functions

Here are some examples:

> t.abs()
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])

> t.sqrt()
tensor([[0.0000, 2.2361, 0.0000],
        [2.4495, 0.0000, 2.6458],
        [0.0000, 2.8284, 0.0000]])

> t.neg()
tensor([[-0., -5., -0.],
        [-6., -0., -7.],
        [-0., -8., -0.]])

> t.neg().abs()
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])
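These element-wise functions are also available in the torch namespace, and a minimal sketch confirms the two forms agree:

```python
import torch

t = torch.tensor([[0, 5, 0],
                  [6, 0, 7],
                  [0, 8, 0]], dtype=torch.float32)

# Each tensor method has an equivalent torch namespace function.
assert torch.equal(t.abs(), torch.abs(t))
assert torch.equal(t.sqrt(), torch.sqrt(t))
assert torch.equal(t.neg(), torch.neg(t))
print("method and function forms agree")
```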
