https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors#Graphs

Given a linear transformation $A$, an eigenvector (also translated into Chinese as 固有向量 or 本征向量) of $A$ is a vector $v$ such that, after this linear transformation[1] is applied, the resulting vector still lies on the same line as the original $v$, although its length or direction may change.

In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that does not change its direction when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation

$T(\mathbf{v}) = \lambda \mathbf{v},$

where λ is a scalar in the field F, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector v.
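As a concrete check of this defining condition, the sketch below (a hypothetical map and hand-picked vectors, chosen purely for illustration) applies a linear map T to a candidate vector and tests whether the image is the claimed scalar multiple of it:

# A linear map T : R^2 -> R^2, written as a plain function.
# T(x, y) = (2x + y, y) has eigenpairs lambda = 2 with v = (1, 0)
# and lambda = 1 with v = (1, -1).
def T(v):
    x, y = v
    return (2 * x + y, y)

def is_eigenvector(T, v, lam, tol=1e-12):
    # Test the defining condition T(v) = lambda * v componentwise.
    w = T(v)
    return all(abs(wi - lam * vi) <= tol for wi, vi in zip(w, v))

print(is_eigenvector(T, (1, 0), 2))    # True:  T(1, 0) = (2, 0) = 2 * (1, 0)
print(is_eigenvector(T, (1, -1), 1))   # True:  T(1, -1) = (1, -1)
print(is_eigenvector(T, (1, 1), 2))    # False: (1, 1) is not an eigenvector of T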

If the vector space V is finite-dimensional, then the linear transformation T can be represented as a square matrix A, and the vector v by a column vector, rendering the above mapping as a matrix multiplication on the left hand side and a scaling of the column vector on the right hand side in the equation

$A\mathbf{v} = \lambda \mathbf{v}.$

There is a correspondence between n by n square matrices and linear transformations from an n-dimensional vector space to itself. For this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations.[1][2]
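In the matrix language, eigenpairs of a concrete matrix can be computed with a standard numerical routine. A minimal sketch using NumPy's numpy.linalg.eig (the 2-by-2 matrix here is an arbitrary example, not taken from the article):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the eigenvalues and a matrix whose COLUMNS are the eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    v = eigenvectors[:, i]
    assert np.allclose(A @ v, lam * v)   # verify A v = lambda v for each pair
    print(lam, v)

# For this matrix the eigenvalues are 3 and 1 (order not guaranteed), with
# eigenvectors along (1, 1) and (1, -1) respectively.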

Geometrically, an eigenvector corresponding to a real nonzero eigenvalue points in a direction that is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed.[3]
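The reversal can be seen with a reflection across the line y = x (a hypothetical example): it stretches nothing, but it has eigenvalues 1 and -1.

import numpy as np

R = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # reflection across the line y = x

v_kept = np.array([1.0, 1.0])       # lies on the mirror line: eigenvalue +1
v_flipped = np.array([1.0, -1.0])   # perpendicular to it:     eigenvalue -1

print(R @ v_kept)     # [1. 1.]   -> unchanged (factor +1)
print(R @ v_flipped)  # [-1.  1.] -> same line, direction reversed (factor -1)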

$Av = \lambda v$

Here the scalar $\lambda$ is the factor by which the eigenvector's length is scaled under the linear transformation, and it is called the eigenvalue (本征值) associated with $v$. If the eigenvalue is positive, the direction of $v$ is unchanged by the transformation; if it is negative, the direction is reversed; and if it is zero, the vector is collapsed to the origin. In every case, the image still lies on the same line.
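The zero-eigenvalue case corresponds to a singular matrix. A small sketch, using orthogonal projection onto the x-axis as an illustrative example:

import numpy as np

P = np.array([[1.0, 0.0],
              [0.0, 0.0]])    # projection onto the x-axis; det(P) = 0

v = np.array([0.0, 1.0])      # eigenvector with eigenvalue 0
print(P @ v)                  # [0. 0.] -> v is collapsed to the origin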

