ACL 2019 Analysis

Word Embedding

22 papers!

Towards Unsupervised Text Classification Leveraging Experts and Word Embeddings

Zied Haj-Yahia, Adrien Sieg and Léa A. Deleris

A Resource-Free Evaluation Metric for Cross-Lingual Word Embeddings Based on Graph Modularity

Yoshinari Fujinuma, Jordan Boyd-Graber and Michael J. Paul

How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions

Goran Glavaš, Robert Litschko, Sebastian Ruder and Ivan Vulić

Diachronic Sense Modeling with Deep Contextualized Word Embeddings: An Ecological View

Renfen Hu, Shen Li and Shichen Liang

Understanding Undesirable Word Embedding Associations

Kawin Ethayarajh, David Duvenaud and Graeme Hirst

Shared-Private Bilingual Word Embeddings for Neural Machine Translation

Xuebo Liu, Derek F. Wong, Yang Liu, Lidia S. Chao, Tong Xiao and Jingbo Zhu

Unsupervised Bilingual Word Embedding Agreement for Unsupervised Neural Machine Translation

Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita and Tiejun Zhao

Gender-preserving Debiasing for Pre-trained Word Embeddings

Masahiro Kaneko and Danushka Bollegala

Relational Word Embeddings

Jose Camacho-Collados, Luis Espinosa Anke and Steven Schockaert

Classification and Clustering of Arguments with Contextualized Word Embeddings

Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab and Iryna Gurevych

Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings

Yadollah Yaghoobzadeh, Katharina Kann, T. J. Hazen, Eneko Agirre and Hinrich Schütze

Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models

Takashi Wada, Tomoharu Iwata and Yuji Matsumoto

Neural Temporality Adaptation for Document Classification: Diachronic Word Embeddings and Domain Adaptation Models

Xiaolei Huang and Michael J. Paul

Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks

Shikhar Vashishth, Manik Bhandari, Prateek Yadav, Piyush Rai, Chiranjib Bhattacharyya and Partha Talukdar

Word2Sense: Sparse Interpretable Word Embeddings

Abhishek Panigrahi, Harsha Vardhan Simhadri and Chiranjib Bhattacharyya

Analyzing the limitations of cross-lingual word embedding mappings

Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa and Eneko Agirre

A Transparent Framework for Evaluating Unintended Demographic Bias in Word Embeddings

Chris Sweeney and Maryam Najafian

Unsupervised Joint Training of Bilingual Word Embeddings

Benjamin Marie and Atsushi Fujita

Exploring Numeracy in Word Embeddings

Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose and Eduard Hovy

Analyzing and Mitigating Gender Bias in Languages with Grammatical Gender and Bilingual Word Embeddings

Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen and Kai-Wei Chang

On Dimensional Linguistic Properties of the Word Embedding Space

Vikas Raunak, Vaibhav Kumar, Vivek Gupta and Florian Metze

Towards incremental learning of word embeddings using context informativeness

Alexandre Kabbach, Kristina Gulordava and Aurélie Herbelot

Word Representation

1 paper

Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation

Benjamin Heinzerling and Michael Strube

Word Vector

3 papers

Unraveling Antonym's Word Vectors through a Siamese-like Network

Mathias Etcheverry and Dina Wonsever

Word and Document Embedding with vMF-Mixture Priors on Context Word Vectors

Shoaib Jameel and Steven Schockaert

Generalized Tuning of Distributional Word Vectors for Monolingual and Cross-Lingual Lexical Entailment

Goran Glavaš and Ivan Vulić

Word

6 papers

LSTMEmbed: Learning Word and Sense Representations from a Large Semantically Annotated Corpus with Long Short-Term Memories

Ignacio Iacobacci and Roberto Navigli

Few-Shot Representation Learning for Out-Of-Vocabulary Words

Ziniu Hu, Ting Chen, Kai-Wei Chang and Yizhou Sun

Zero-shot Word Sense Disambiguation using Sense Definition Embeddings

Sawan Kumar, Sharmistha Jat, Karan Saxena and Partha Talukdar

Text Categorization by Learning Predominant Sense of Words as Auxiliary Task

Kazuya Shimura, Jiyi Li and Fumiyo Fukumoto

Learning to Discover, Ground and Use Words with Segmental Neural Language Models

Kazuya Kawakami, Chris Dyer and Phil Blunsom

Multiple Character Embeddings for Chinese Word Segmentation

Jianing Zhou, Jingkang Wang and Gongshen Liu
