IJCAI 2019 Analysis

Keyword with no retrieved papers: retrofitting

word embedding

Getting in Shape: Word Embedding SubSpaces

Many tasks in natural language processing require the alignment of word embeddings.

Embedding alignment relies on the geometric properties of the manifold of word vectors.

This paper focuses on supervised linear alignment and studies how the shape of the target embedding affects alignment quality.

We assess the performance of aligned word vectors on semantic similarity tasks and find that the isotropy of the target embedding is critical to the alignment.

Furthermore, aligning with isotropic noise can deliver satisfactory results.

We provide a theoretical framework and guarantees that aid in the understanding of the empirical results.
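Supervised linear alignment of embeddings is commonly solved in closed form as an orthogonal Procrustes problem. The sketch below is not from the paper; it is a minimal NumPy illustration of the alignment step the abstract refers to, using synthetic vectors:

```python
import numpy as np

def procrustes_align(X, Y):
    """Return the orthogonal matrix W minimizing ||XW - Y||_F.

    X, Y: (n, d) arrays of paired source/target word vectors.
    Closed form: W = U V^T, where U S V^T is the SVD of X^T Y.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))           # source embedding (synthetic)
Q = np.linalg.qr(rng.standard_normal((10, 10)))[0]  # hidden rotation
Y = X @ Q                                    # target = rotated source
W = procrustes_align(X, Y)
err = np.linalg.norm(X @ W - Y)              # should recover Q exactly
print(err)
```

The paper's point about isotropy can be read through this lens: when the target vectors spread evenly in all directions, the SVD above is well conditioned, whereas an anisotropic target concentrates the alignment signal in a few directions.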

The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning

Recent advances in deep learning have increased the demand for neural models in real applications.

In practice, these applications often need to be deployed with limited resources while maintaining high accuracy.

This paper addresses the core of neural models in NLP, word embeddings, and presents an embedding distillation framework that markedly reduces the dimension of word embeddings without compromising accuracy.

A new distillation ensemble approach is also proposed that trains a highly efficient student model using multiple teacher models.

In this approach, the teacher models play a role only during training; the student model operates on its own, without support from the teachers during decoding, which makes it as fast and lightweight as any single model.

All models are evaluated on seven document classification datasets and show a significant advantage over the teacher models in most cases.

The analysis reveals an insightful transformation of word embeddings through distillation and suggests a future direction for ensemble approaches using neural models.

word vector

A Latent Variable Model for Learning Distributional Relation Vectors

Recently, a number of unsupervised approaches have been proposed for learning vectors that capture the relationship between two words.

Inspired by word embedding models, these approaches rely on co-occurrence statistics obtained from sentences in which the two target words appear.

However, the number of such sentences is often quite small, and most of the words that occur in them are not relevant for characterizing the considered relationship.

As a result, standard co-occurrence statistics typically lead to noisy relation vectors.

To address this issue, we propose a latent variable model that aims to explicitly determine which words from the given sentences best characterize the relationship between the two target words.

Relation vectors then correspond to the parameters of a simple unigram language model estimated from these words.
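The co-occurrence baseline the paper improves on can be sketched in a few lines: gather the context words of sentences containing both targets and estimate a unigram distribution over them. This sketch omits the paper's latent-variable weighting of informative words; names and the toy corpus are illustrative:

```python
from collections import Counter

def relation_vector(sentences, w1, w2):
    # Unigram-LM relation vector: probabilities over the context words
    # seen in sentences that contain both target words.
    counts = Counter()
    for sent in sentences:
        toks = sent.lower().split()
        if w1 in toks and w2 in toks:
            counts.update(t for t in toks if t not in (w1, w2))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

corpus = [
    "paris is the capital of france",
    "paris the largest city of france",
    "berlin is in germany",
]
vec = relation_vector(corpus, "paris", "france")
print(vec)
```

The noise the abstract describes is visible even here: function words like "the" and "of" dominate the distribution, which is exactly what the latent variable model is designed to filter out.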

word representation

Refining Word Representations by Manifold Learning

Pre-trained distributed word representations have proven useful in various natural language processing (NLP) tasks.

However, the effect of words' geometric structure on word representations has not yet been carefully studied.

Existing word representation methods underestimate the similarity of words that are close in Euclidean space, while overestimating that of words separated by much greater distances.

In this paper, we propose a word vector refinement model that corrects pre-trained word embeddings, using manifold learning to bring the similarity of words in Euclidean space closer to their semantics.

The approach is theoretically grounded in the metric recovery paradigm.

The refined word representations are evaluated on a variety of lexical-level intrinsic tasks (semantic relatedness, semantic similarity), and the experimental results show that the proposed model outperforms several popular word representation approaches.
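The core manifold-learning move behind this kind of refinement is to replace straight-line Euclidean distances with geodesic distances measured along a nearest-neighbor graph, as in Isomap-style metric recovery. The following is an illustrative sketch, not the paper's exact model:

```python
import numpy as np

def geodesic_distances(E, n_neighbors=2):
    # Build a k-NN graph over the vectors, then compute shortest-path
    # (geodesic) distances with Floyd-Warshall. Distances along the
    # graph follow the data manifold rather than cutting across it.
    n = len(E)
    D = np.linalg.norm(E[:, None] - E[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    for i in range(n):
        nearest = np.argsort(D[i])[: n_neighbors + 1]  # includes i itself
        G[i, nearest] = D[i, nearest]
    G = np.minimum(G, G.T)                 # symmetrise the graph
    for k in range(n):                     # all-pairs shortest paths
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    return G

# Points along a semicircular arc: the geodesic between the endpoints
# follows the arc, so it is longer than the straight chord.
t = np.linspace(0.0, np.pi, 8)
E = np.stack([np.cos(t), np.sin(t)], axis=1)
G = geodesic_distances(E)
print(G[0, -1], np.linalg.norm(E[0] - E[-1]))
```

This is the sense in which Euclidean distance "overestimates" similarity between far-apart points: the chord between the arc's endpoints is much shorter than the path the data actually follows.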
