IJCAI 2019 Analysis
Keyword with no retrieved papers: retrofitting
word embedding
Getting in Shape: Word Embedding SubSpaces
Many tasks in natural language processing require the alignment of word embeddings.
Embedding alignment relies on the geometric properties of the manifold of word vectors.
This paper focuses on supervised linear alignment and studies how the shape of the target embedding relates to alignment.
We assess the performance of aligned word vectors on semantic similarity tasks and find that the isotropy of the target embedding is critical to the alignment.
Furthermore, aligning with isotropic noise can deliver satisfactory results.
We provide a theoretical framework and guarantees which aid in the understanding of empirical results.
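The supervised linear alignment studied above can be sketched with orthogonal Procrustes. The code below is a minimal illustration, not the paper's exact method: the isotropy score (ratio of smallest to largest singular value) and the synthetic data are assumptions made for the demo.

```python
import numpy as np

def procrustes_align(X, Y):
    """Supervised linear alignment: the orthogonal W minimizing ||XW - Y||_F
    over paired rows is W = U V^T, where U S V^T = svd(X^T Y)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def isotropy(Z):
    """A crude isotropy score: ratio of smallest to largest singular value
    of the centered embedding matrix; 1.0 means perfectly isotropic."""
    s = np.linalg.svd(Z - Z.mean(axis=0), compute_uv=False)
    return s[-1] / s[0]

rng = np.random.default_rng(0)
Y = rng.standard_normal((1000, 50))          # target embedding: isotropic noise
R, _ = np.linalg.qr(rng.standard_normal((50, 50)))
X = Y @ R.T                                  # source = rotated copy of target
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))      # alignment recovers the rotation
```

Because the Gaussian target is nearly isotropic (`isotropy(Y)` is well above zero), the rotation is recovered exactly, which loosely mirrors the paper's observation that isotropy of the target is what makes linear alignment work.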
The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning
Recent advances in deep learning have increased the demand for neural models in real-world applications.
In practice, these applications often need to be deployed with limited resources while maintaining high accuracy.
This paper addresses the core of neural models in NLP, word embeddings, and presents an embedding distillation framework that remarkably reduces the dimension of word embeddings without compromising accuracy.
A new distillation ensemble approach is also proposed that trains a highly efficient student model using multiple teacher models.
In our approach, the teacher models play a role only during training; during decoding, the student model operates on its own without support from the teacher models, which makes it as fast and lightweight as any single model.
All models are evaluated on seven document classification datasets and show a significant advantage over the teacher models in most cases.
Our analysis offers insight into how word embeddings are transformed by distillation and suggests a future direction for ensemble approaches using neural models.
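The paper's distillation framework trains a neural student; as a crude linear stand-in, projecting teacher embeddings onto their top principal directions illustrates the dimension-reduction goal of shrinking embeddings while preserving their geometry. All names and dimensions below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 300-d "teacher" embeddings, given low-rank structure so that
# a 50-d compression can preserve the geometry.
teacher = rng.standard_normal((500, 50)) @ rng.standard_normal((50, 300))

# A crude linear "distillation": project the teacher vectors onto their
# top 50 principal directions, shrinking 300 dims to 50.
centered = teacher - teacher.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
student = centered @ Vt[:50].T

# A distilled embedding should preserve the teacher's nearest neighbors.
def nearest(emb, i):
    d = np.linalg.norm(emb - emb[i], axis=1)
    d[i] = np.inf
    return int(d.argmin())

agree = sum(nearest(teacher, i) == nearest(student, i) for i in range(100))
print(agree)  # 100: neighbors survive because the data is exactly low rank
```

Real embeddings are not exactly low rank, which is why the paper needs a trained student rather than a fixed linear projection; the sketch only shows what "reduce dimension without losing the neighborhood structure" means.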
word vector
A Latent Variable Model for Learning Distributional Relation Vectors
Recently, a number of unsupervised approaches have been proposed for learning vectors that capture the relationship between two words.
Inspired by word embedding models, these approaches rely on co-occurrence statistics that are obtained from sentences in which the two target words appear.
However, the number of such sentences is often quite small, and most of the words that occur in them are not relevant for characterizing the considered relationship.
As a result, standard co-occurrence statistics typically lead to noisy relation vectors.
To address this issue, we propose a latent variable model that aims to explicitly determine what words from the given sentences best characterize the relationship between the two target words.
Relation vectors then correspond to the parameters of a simple unigram language model which is estimated from these words.
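The noise problem this abstract describes can be seen in a toy sketch: estimating a unigram language model over all context words of a target pair lets frequent but irrelevant words dominate the relation vector. The corpus and word pair below are invented for illustration; this is the naive baseline, not the paper's latent variable model.

```python
from collections import Counter

# Toy corpus: sentences mentioning the target pair ("paris", "france").
sentences = [
    "paris is the capital of france",
    "paris the largest city in france",
    "the weather in paris france was rainy",   # mostly irrelevant words
]

def relation_vector(sentences, w1, w2):
    """Estimate a unigram language model over context words from sentences
    containing both targets; the probabilities form the relation vector."""
    counts = Counter()
    for s in sentences:
        toks = s.split()
        if w1 in toks and w2 in toks:
            counts.update(t for t in toks if t not in (w1, w2))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

vec = relation_vector(sentences, "paris", "france")
print(max(vec, key=vec.get))  # "the": noise dominates the naive estimate
```

The latent variable model's job is precisely to down-weight words like "the" and "in" and keep words like "capital" that actually characterize the relationship.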
word representation
Refining Word Representations by Manifold Learning
Pre-trained distributed word representations have been proven useful in various natural language processing (NLP) tasks.
However, the effect of the words' geometric structure on word representations has not yet been carefully studied.
Existing word representation methods underestimate words whose distances are close in Euclidean space, while overestimating words at much greater distances.
In this paper, we propose a word vector refinement model that corrects pre-trained word embeddings, using manifold learning to bring the similarity of words in Euclidean space closer to word semantics.
This approach is theoretically grounded in the metric recovery paradigm.
Our word representations are evaluated on a variety of lexical-level intrinsic tasks (semantic relatedness, semantic similarity), and the experimental results show that the proposed model outperforms several popular word representation approaches.
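The metric recovery idea behind this refinement can be sketched with an Isomap-style computation: on a curved manifold, straight-line Euclidean distance understates the distance along the manifold, and shortest paths in a nearest-neighbor graph recover it. This is a generic minimal sketch on synthetic 2-D points, not the paper's refinement model.

```python
import numpy as np
from itertools import product

# Points on a half circle: Euclidean distance between the two endpoints (2.0)
# underestimates how far apart they are along the curve (about pi).
t = np.linspace(0, np.pi, 20)
X = np.column_stack([np.cos(t), np.sin(t)])

# Build a k-nearest-neighbor graph, then recover geodesic distances with
# Floyd-Warshall shortest paths, the core step of Isomap-style metric recovery.
n, k = len(X), 2
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # Euclidean distances
G = np.full((n, n), np.inf)
np.fill_diagonal(G, 0.0)
for i in range(n):
    for j in np.argsort(D[i])[1:k + 1]:               # k nearest neighbors
        G[i, j] = G[j, i] = D[i, j]
for m, i, j in product(range(n), range(n), range(n)):  # m must be outermost
    G[i, j] = min(G[i, j], G[i, m] + G[m, j])

print(D[0, -1], G[0, -1])  # straight-line 2.0 vs. arc length close to pi
```

A refinement model in this spirit would then adjust the embedding so that Euclidean similarities track these recovered manifold distances rather than the raw straight-line ones.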