Element-wise operations

An element-wise operation operates on corresponding elements between tensors.

Two tensors must have the same shape in order to perform element-wise operations on them (we will see shortly how broadcasting relaxes this requirement).

Suppose we have the following two tensors (both are rank-2 tensors with a shape of 2 × 2):

import torch

t1 = torch.tensor([
    [1, 2],
    [3, 4]
], dtype=torch.float32)

t2 = torch.tensor([
    [9, 8],
    [7, 6]
], dtype=torch.float32)
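
As a quick sanity check (not part of the original example, but standard PyTorch), we can confirm the shape and rank directly:

> t1.shape
torch.Size([2, 2])

> len(t1.shape)
2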

The elements of the first axis are arrays and the elements of the second axis are numbers.

# Example of the first axis
> print(t1[0])
tensor([1., 2.])

# Example of the second axis
> print(t1[0][0])
tensor(1.)

Addition is an element-wise operation.

> t1 + t2
tensor([[10., 10.],
        [10., 10.]])

In fact, all of the arithmetic operations (add, subtract, multiply, and divide) are element-wise operations. There are two ways to perform them, shown here on the tensor t1:

  1. Using these symbolic operations:

> t1 + 2
tensor([[3., 4.],
        [5., 6.]])

> t1 - 2
tensor([[-1., 0.],
        [ 1., 2.]])

> t1 * 2
tensor([[2., 4.],
        [6., 8.]])

> t1 / 2
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])
  2. Or equivalently, these built-in tensor methods:

> t1.add(2)
tensor([[3., 4.],
        [5., 6.]])

> t1.sub(2)
tensor([[-1., 0.],
        [ 1., 2.]])

> t1.mul(2)
tensor([[2., 4.],
        [6., 8.]])

> t1.div(2)
tensor([[0.5000, 1.0000],
        [1.5000, 2.0000]])
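
The two forms are equivalent; as a quick check (not from the original article), torch.equal confirms that the results match exactly:

> torch.equal(t1 + 2, t1.add(2))
True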

Broadcasting tensors

Broadcasting is what allows us to perform element-wise operations between tensors of different shapes, such as adding a scalar to a higher-dimensional tensor.

We can see what the broadcasted scalar value looks like using the NumPy broadcast_to() function:

> import numpy as np
> np.broadcast_to(2, t1.shape)
array([[2, 2],
       [2, 2]])

This means the scalar value is transformed into a rank-2 tensor just like t1, and, just like that, the shapes match and the element-wise rule of having the same shape is back in play.
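
Newer versions of PyTorch also expose this directly as torch.broadcast_to (assuming a reasonably recent install), so the same check works without the NumPy round trip:

> torch.broadcast_to(torch.tensor(2.), t1.shape)
tensor([[2., 2.],
        [2., 2.]])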

Trickier example of broadcasting

t1 = torch.tensor([
    [1, 1],
    [1, 1]
], dtype=torch.float32)

t2 = torch.tensor([2, 4], dtype=torch.float32)

Even though these two tensors have differing shapes, the element-wise operation is still possible, and broadcasting is what makes it work. The lower-rank tensor t2 is transformed to match the shape of t1:

> np.broadcast_to(t2.numpy(), t1.shape)
array([[2., 4.],
       [2., 4.]], dtype=float32)

> t1 + t2
tensor([[3., 5.],
        [3., 5.]])
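
The general rule, shared by NumPy and PyTorch, is that shapes are compared from the trailing dimension backwards, and two dimensions are compatible when they are equal or one of them is 1. A small illustrative sketch (these tensors are made up for the demonstration):

a = torch.ones(2, 2)   # shape (2, 2)
b = torch.ones(2)      # shape (2,): trailing dims match, broadcasts to (2, 2)
c = torch.ones(3)      # shape (3,): trailing dims differ, cannot broadcast

print((a + b).shape)   # torch.Size([2, 2])
# a + c                # would raise a RuntimeError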

When do we actually use broadcasting? We often need it when preprocessing data, and especially during normalization routines.
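
As a sketch of that use case (the data and statistics here are made up for illustration), normalizing a batch of samples feature-by-feature broadcasts rank-1 statistics across a rank-2 tensor:

# Hypothetical batch: 3 samples, 2 features
data = torch.tensor([
    [1., 10.],
    [2., 20.],
    [3., 30.]
])

mean = data.mean(dim=0)           # shape (2,): per-feature mean
std = data.std(dim=0)             # shape (2,): per-feature standard deviation

normalized = (data - mean) / std  # (3, 2) op (2,) broadcasts across rows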


Comparison operations

Comparison operations are also element-wise. For a given comparison operation between two tensors of the same shape, a new tensor of the same shape is returned, with each element containing either a 0 (false) or a 1 (true). Note that in PyTorch 1.2 and later, comparison operations return torch.bool tensors of True/False values; the torch.uint8 outputs below come from an earlier version.

> t = torch.tensor([
    [0, 5, 0],
    [6, 0, 7],
    [0, 8, 0]
], dtype=torch.float32)

Let's check out some of the comparison operations.

> t.eq(0)
tensor([[1, 0, 1],
        [0, 1, 0],
        [1, 0, 1]], dtype=torch.uint8)

> t.ge(0)
tensor([[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]], dtype=torch.uint8)

> t.gt(0)
tensor([[0, 1, 0],
        [1, 0, 1],
        [0, 1, 0]], dtype=torch.uint8)

> t.lt(0)
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]], dtype=torch.uint8)

> t.le(7)
tensor([[1, 1, 1],
        [1, 1, 1],
        [1, 0, 1]], dtype=torch.uint8)
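
A common follow-up is to use a comparison result as a mask, for example to count or select the nonzero elements (a small usage sketch, not from the original examples):

> t.gt(0).sum()
tensor(4)

> t[t.gt(0)]
tensor([5., 6., 7., 8.])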

Element-wise operations using functions

Here are some examples:

> t.abs()
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])

> t.sqrt()
tensor([[0.0000, 2.2361, 0.0000],
        [2.4495, 0.0000, 2.6458],
        [0.0000, 2.8284, 0.0000]])

> t.neg()
tensor([[-0., -5., -0.],
        [-6., -0., -7.],
        [-0., -8., -0.]])

> t.neg().abs()
tensor([[0., 5., 0.],
        [6., 0., 7.],
        [0., 8., 0.]])
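
Each of these methods also has an equivalent function in the torch namespace (torch.abs, torch.sqrt, torch.neg) that returns the same result:

> torch.equal(torch.abs(t), t.abs())
True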
