ACL 2019 Analysis

Word Embedding

22 papers!

Towards Unsupervised Text Classification Leveraging Experts and Word Embeddings

Zied Haj-Yahia, Adrien Sieg and Léa A. Deleris

A Resource-Free Evaluation Metric for Cross-Lingual Word Embeddings Based on Graph Modularity

Yoshinari Fujinuma, Jordan Boyd-Graber and Michael J. Paul

How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions

Goran Glavaš, Robert Litschko, Sebastian Ruder and Ivan Vulić

Diachronic Sense Modeling with Deep Contextualized Word Embeddings: An Ecological View

Renfen Hu, Shen Li and Shichen Liang

Understanding Undesirable Word Embedding Associations

Kawin Ethayarajh, David Duvenaud and Graeme Hirst

Shared-Private Bilingual Word Embeddings for Neural Machine Translation

Xuebo Liu, Derek F. Wong, Yang Liu, Lidia S. Chao, Tong Xiao and Jingbo Zhu

Unsupervised Bilingual Word Embedding Agreement for Unsupervised Neural Machine Translation

Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita and Tiejun Zhao

Gender-preserving Debiasing for Pre-trained Word Embeddings

Masahiro Kaneko and Danushka Bollegala

Relational Word Embeddings

Jose Camacho-Collados, Luis Espinosa Anke and Steven Schockaert

Classification and Clustering of Arguments with Contextualized Word Embeddings

Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab and Iryna Gurevych

Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings

Yadollah Yaghoobzadeh, Katharina Kann, T. J. Hazen, Eneko Agirre and Hinrich Schütze

Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models

Takashi Wada, Tomoharu Iwata and Yuji Matsumoto

Neural Temporality Adaptation for Document Classification: Diachronic Word Embeddings and Domain Adaptation Models

Xiaolei Huang and Michael J. Paul

Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks

Shikhar Vashishth, Manik Bhandari, Prateek Yadav, Piyush Rai, Chiranjib Bhattacharyya and Partha Talukdar

Word2Sense: Sparse Interpretable Word Embeddings

Abhishek Panigrahi, Harsha Vardhan Simhadri and Chiranjib Bhattacharyya

Analyzing the limitations of cross-lingual word embedding mappings

Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa and Eneko Agirre

A Transparent Framework for Evaluating Unintended Demographic Bias in Word Embeddings

Chris Sweeney and Maryam Najafian

Unsupervised Joint Training of Bilingual Word Embeddings

Benjamin Marie and Atsushi Fujita

Exploring Numeracy in Word Embeddings

Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose and Eduard Hovy

Analyzing and Mitigating Gender Bias in Languages with Grammatical Gender and Bilingual Word Embeddings

Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen and Kai-Wei Chang

On Dimensional Linguistic Properties of the Word Embedding Space

Vikas Raunak, Vaibhav Kumar, Vivek Gupta and Florian Metze

Towards incremental learning of word embeddings using context informativeness

Alexandre Kabbach, Kristina Gulordava and Aurélie Herbelot

Word Representation

Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation

Benjamin Heinzerling and Michael Strube

Word Vector

3 papers

Unraveling Antonym's Word Vectors through a Siamese-like Network

Mathias Etcheverry and Dina Wonsever

Word and Document Embedding with vMF-Mixture Priors on Context Word Vectors

Shoaib Jameel and Steven Schockaert

Generalized Tuning of Distributional Word Vectors for Monolingual and Cross-Lingual Lexical Entailment

Goran Glavaš and Ivan Vulić

Word

LSTMEmbed: Learning Word and Sense Representations from a Large Semantically Annotated Corpus with Long Short-Term Memories

Ignacio Iacobacci and Roberto Navigli

Few-Shot Representation Learning for Out-Of-Vocabulary Words

Ziniu Hu, Ting Chen, Kai-Wei Chang and Yizhou Sun

Zero-shot Word Sense Disambiguation using Sense Definition Embeddings

Sawan Kumar, Sharmistha Jat, Karan Saxena and Partha Talukdar

Text Categorization by Learning Predominant Sense of Words as Auxiliary Task

Kazuya Shimura, Jiyi Li and Fumiyo Fukumoto

Learning to Discover, Ground and Use Words with Segmental Neural Language Models

Kazuya Kawakami, Chris Dyer and Phil Blunsom

Multiple Character Embeddings for Chinese Word Segmentation

Jianing Zhou, Jingkang Wang and Gongshen Liu
