METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments

Satanjeev Banerjee   Alon Lavie 

Language Technologies Institute  

Carnegie Mellon University  

Pittsburgh, PA 15213  

banerjee+@cs.cmu.edu  alavie@cs.cmu.edu

Important Snippets:

1. In order to be both effective and useful, an automatic metric for MT evaluation has to satisfy several basic criteria. The primary and most intuitive requirement is that the metric have very high correlation with quantified human notions of MT quality. Furthermore, a good metric should be as sensitive as possible to differences in MT quality between different systems, and between different versions of the same system. The metric should be consistent (same MT system on similar texts should produce similar scores), reliable (MT systems that score similarly can be trusted to perform similarly) and general (applicable to different MT tasks in a wide range of domains and scenarios). Needless to say, satisfying all of the above criteria is extremely difficult, and all of the metrics that have been proposed so far fall short of adequately addressing most if not all of these requirements.

2. It is based on an explicit word-to-word matching between the MT output being evaluated and one or more reference translations. Our current matching supports not only matching between words that are identical in the two strings being compared, but can also match words that are simple morphological variants of each other.

3. Each possible matching is scored based on a combination of several features. These currently include unigram precision, unigram recall, and a direct measure of how out-of-order the words of the MT output are with respect to the reference.

4. Furthermore, our results demonstrated that recall plays a more important role than precision in obtaining high levels of correlation with human judgments.

5. BLEU does not take recall into account directly.

6. BLEU does not use recall because the notion of recall is unclear when matching simultaneously against a set of reference translations (rather than a single reference). To compensate for recall, BLEU uses a Brevity Penalty, which penalizes translations for being “too short”.
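For reference, the Brevity Penalty as defined in the original BLEU paper (Papineni et al., 2002) scales the whole score down when the candidate translation is shorter than the reference:

```latex
BP =
\begin{cases}
1 & \text{if } c > r \\
e^{\,1 - r/c} & \text{if } c \le r
\end{cases}
```

where $c$ is the length of the candidate translation and $r$ is the effective reference length.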

7. BLEU and NIST suffer from several weaknesses:

>The Lack of Recall

>Use of Higher Order N-grams

>Lack of Explicit Word-matching Between Translation and Reference

>Use of Geometric Averaging of N-grams

8. METEOR was designed to explicitly address the weaknesses in BLEU identified above. It evaluates a translation by computing a score based on explicit word-to-word matches between the translation and a reference translation. If more than one reference translation is available, the given translation is scored against each reference independently, and the best score is reported.
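As a quick way to see this behavior, NLTK ships a METEOR implementation that likewise scores the hypothesis against every reference and returns the best score. A usage sketch (recent NLTK versions expect pre-tokenized input; the exact API may differ across versions):

```python
# Usage sketch, not the authors' original code.
# Requires the WordNet data: nltk.download('wordnet')
from nltk.translate.meteor_score import meteor_score

hypothesis = "the cat sat on the mat".split()
references = [
    "the cat is on the mat".split(),
    "there is a cat on the mat".split(),
]

# Scores against each reference independently and reports the best score.
print(meteor_score(references, hypothesis))
```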

9. Given a pair of translations to be compared (a system translation and a reference translation), METEOR creates an alignment between the two strings. We define an alignment as a mapping between unigrams, such that every unigram in each string maps to zero or one unigram in the other string, and to no unigrams in the same string.
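A minimal illustration (not from the paper) of this constraint, with an alignment represented as (system position, reference position) pairs:

```python
# Illustrative sketch: an alignment is a set of (system_pos, ref_pos) unigram
# mappings in which each position appears at most once on either side.
def is_valid_alignment(pairs):
    sys_positions = [i for i, _ in pairs]
    ref_positions = [j for _, j in pairs]
    # every unigram maps to zero or one unigram in the other string
    return (len(set(sys_positions)) == len(sys_positions)
            and len(set(ref_positions)) == len(ref_positions))

print(is_valid_alignment([(0, 1), (1, 0), (3, 3)]))  # True
print(is_valid_alignment([(0, 1), (0, 2)]))          # False: one unigram mapped twice
```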

10. This alignment is incrementally produced through a series of stages, each stage consisting of two distinct phases.

11. In the first phase an external module lists all the possible unigram mappings between the two strings.

12. Different modules map unigrams based on different criteria. The “exact” module maps two unigrams if they are exactly the same (e.g. “computers” maps to “computers” but not “computer”). The “porter stem” module maps two unigrams if they are the same after they are stemmed using the Porter stemmer (e.g.: “computers” maps to both “computers” and to “computer”). The “WN synonymy” module maps two unigrams if they are synonyms of each other.
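A minimal sketch of what such modules might look like, assuming NLTK's Porter stemmer and WordNet interface (this is not the authors' implementation):

```python
# Hypothetical re-creation of the three mapping modules using NLTK.
# Requires the WordNet data: nltk.download('wordnet')
from nltk.stem.porter import PorterStemmer
from nltk.corpus import wordnet

stemmer = PorterStemmer()

def exact_match(u1, u2):
    # "exact" module: unigrams match only if they are identical strings
    return u1 == u2

def porter_stem_match(u1, u2):
    # "porter stem" module: unigrams match if their Porter stems are identical
    return stemmer.stem(u1) == stemmer.stem(u2)

def wn_synonymy_match(u1, u2):
    # "WN synonymy" module: unigrams match if they share at least one WordNet synset
    synsets1 = {s.name() for s in wordnet.synsets(u1)}
    synsets2 = {s.name() for s in wordnet.synsets(u2)}
    return bool(synsets1 & synsets2)

print(exact_match("computers", "computer"))        # False
print(porter_stem_match("computers", "computer"))  # True
```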

13. In the second phase of each stage, the largest subset of these unigram mappings is selected such that the resulting set constitutes an alignment as defined above.

14. If more than one subset of the same (largest) size constitutes an alignment, METEOR selects the set that has the least number of unigram mapping crosses, as sketched below.
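Two mappings cross when they link words in opposite relative orders. A small illustrative sketch (not the paper's code) of counting crosses in a candidate alignment:

```python
# Illustrative only: count crossing pairs among (system_pos, ref_pos) mappings.
from itertools import combinations

def count_crosses(alignment):
    return sum(
        1
        for (i1, j1), (i2, j2) in combinations(alignment, 2)
        if (i1 - i2) * (j1 - j2) < 0  # the two mappings point in opposite orders
    )

print(count_crosses([(0, 0), (1, 1), (2, 2)]))  # 0: monotone alignment
print(count_crosses([(0, 1), (1, 0)]))          # 1: the two mappings cross
```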

15. By default the first stage uses the “exact” mapping module, the second the “porter stem” module and the third the “WN synonymy” module.

16. Once a final alignment has been produced, METEOR computes:

unigram precision (P)

unigram recall (R)

Fmean, by combining the precision and recall via a harmonic mean that places most of the weight on recall

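Per the original paper, with m matched unigrams, w_t unigrams in the system translation and w_r unigrams in the reference, the harmonic mean is weighted heavily toward recall:

```latex
P = \frac{m}{w_t}, \qquad
R = \frac{m}{w_r}, \qquad
F_{mean} = \frac{10\,P\,R}{R + 9\,P}
```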

To take into account longer matches, METEOR computes a penalty for a given alignment as follows. First, the matched unigrams are grouped into the fewest possible number of chunks such that the unigrams in each chunk are in adjacent positions in the system translation, and are also mapped to unigrams that are in adjacent positions in the reference translation.
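The penalty grows with the number of chunks relative to the number of matched unigrams, and the final score is then, per the paper:

```latex
Penalty = 0.5 \times \left( \frac{\#\,\text{chunks}}{\#\,\text{unigrams matched}} \right)^{3}, \qquad
Score = F_{mean} \times (1 - Penalty)
```

When the matched words appear in the same order in both strings they collapse into a few long chunks and the penalty is small; in the worst case, where every matched unigram forms its own chunk, the penalty approaches 0.5, so word-order differences can reduce Fmean by at most 50%.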

Conclusion: METEOR weights recall more heavily than precision, whereas BLEU does the converse. METEOR also incorporates more kinds of information into its score, such as stemming, WordNet synonymy, and word order.

