=======================================================

A while back I reviewed submissions for AAAI 2022 on behalf of our lab. This review template is quite good, so I am saving a copy here for reference when reviewing for other journals and conferences in the future.

========================================

Thank you for your contribution to AAAI 2022.

Edit Review

Paper ID

Paper Title

Track

Main Track

REVIEW QUESTIONS

1.  {Summary} Please briefly summarize the main claims/contributions of the paper in your own words. (Please do not include your evaluation of the paper here). * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

2.  {Novelty} How novel are the concepts, problems addressed, or methods introduced in the paper? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The main ideas of the paper are ground-breaking.
  • Good: The paper makes non-trivial advances over the current state-of-the-art.
  • Fair: The paper contributes some new ideas.
  • Poor: The main ideas of the paper are not novel or represent incremental advances.

3.  {Soundness} Is the paper technically sound? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: I am confident that the paper is technically sound, and I have carefully checked the details.
  • Good: The paper appears to be technically sound, but I have not carefully checked the details.
  • Fair: The paper has minor, easily fixable, technical flaws that do not impact the validity of the main results.
  • Poor: The paper has major technical flaws.

4.  {Impact} How do you rate the likely impact of the paper on the AI research community? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The paper is likely to have high impact across more than one subfield of AI.
  • Good: The paper is likely to have high impact within a subfield of AI OR moderate impact across more than one subfield of AI.
  • Fair: The paper is likely to have moderate impact within a subfield of AI.
  • Poor: The paper is likely to have minimal impact on AI.

5.  {Clarity} Is the paper well-organized and clearly written? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The paper is well-organized and clearly written.
  • Good: The paper is well organized but the presentation could be improved.
  • Fair: The paper is somewhat clear, but some important details are missing or unclear.
  • Poor: The paper is unclear and very hard to understand.

6.  {Evaluation} If applicable, are the main claims well supported by experiments? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The experimental evaluation is comprehensive and the results are compelling.
  • Good: The experimental evaluation is adequate, and the results convincingly support the main claims.
  • Fair: The experimental evaluation is weak: important baselines are missing, or the results do not adequately support the main claims.
  • Poor: The experimental evaluation is flawed or the results fail to adequately support the main claims.
  • Not applicable: The paper does not present an experimental evaluation (the main focus of the paper is theoretical).

7.  {Resources} If applicable, how would you rate the new resources (code, data sets) the paper contributes? (It might help to consult the paper’s reproducibility checklist) * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The shared resources are likely to have a broad impact on one or more sub-areas of AI.
  • Good: The shared resources are likely to be very useful to other AI researchers.
  • Fair: The shared resources are likely to be moderately useful to other AI researchers.
  • Poor: The shared resources are unlikely to be useful to other AI researchers.
  • Not applicable: For instance, the primary contributions of the paper are theoretical.

8.  {Reproducibility} Are the results (e.g., theorems, experimental results) in the paper easily reproducible? (It may help to consult the paper’s reproducibility checklist.) * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: Key resources (e.g., proofs, code, data) are available and key details (e.g., proof sketches, experimental setup) are comprehensively described for competent researchers to confidently and easily reproduce the main results.
  • Good: Key resources (e.g., proofs, code, data) are available and key details (e.g., proofs, experimental setup) are sufficiently well-described for competent researchers to confidently reproduce the main results.
  • Fair: Key resources (e.g., proofs, code, data) are unavailable but key details (e.g., proof sketches, experimental setup) are sufficiently well-described for an expert to confidently reproduce the main results.
  • Poor: Key details (e.g., proof sketches, experimental setup) are incomplete/unclear, or key resources (e.g., proofs, code, data) are unavailable.

9.  {Ethical Considerations} Does the paper adequately address the applicable ethical considerations, e.g., responsible data collection and use (e.g., informed consent, privacy), possible societal harm (e.g., exacerbating injustice or discrimination due to algorithmic bias), etc.? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The paper comprehensively addresses all of the applicable ethical considerations.
  • Good: The paper adequately addresses most, but not all, of the applicable ethical considerations.
  • Fair: The paper addresses some but not all of the applicable ethical considerations.
  • Poor: The paper fails to address most of the applicable ethical considerations.
  • Not Applicable: The paper does not have any ethical considerations to address.

10.  {Reasons to Accept} Please list the key strengths of the paper (explain and summarize your rationale for your evaluations with respect to questions 1-9 above). * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

11.  {Reasons to Reject} Please list the key weaknesses of the paper (explain and summarize your rationale for your evaluations with respect to questions 1-9 above). * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

12.  {Questions for the Authors} Please provide questions that you would like the authors to answer during the author feedback period. Please number them. * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

13.  {Detailed Feedback for the Authors} Please provide other detailed, constructive feedback to the authors. * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

14.  (OVERALL EVALUATION) Please provide your overall evaluation of the paper, carefully weighing the reasons to accept and the reasons to reject the paper. Ideally, we should have:

No more than 25% of the submitted papers in (Accept + Strong Accept + Very Strong Accept + Award Quality) categories;

No more than 20% of the submitted papers in (Strong Accept + Very Strong Accept + Award Quality) categories;

No more than 10% of the submitted papers in (Very Strong Accept + Award Quality) categories;

No more than 1% of the submitted papers in the Award Quality category;

(visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Award quality: Technically flawless paper with groundbreaking impact on one or more areas of AI, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations.
  • Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI and excellent impact on multiple areas of AI, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
  • Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area of AI or high to excellent impact on multiple areas of AI, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
  • Accept: Technically solid paper, with high impact on at least one sub-area of AI or moderate to high impact on more than one area of AI, with good to excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
  • Weak Accept: Technically solid, moderate to high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
  • Borderline Accept: Technically solid paper where reasons to accept, e.g., novelty, outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
  • Borderline Reject: Technically solid paper where reasons to reject, e.g., lack of novelty, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
  • Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility, incompletely addressed ethical considerations.
  • Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility, mostly unaddressed ethical considerations.
  • Very Strong Reject: For instance, a paper with trivial results, limited novelty, poor impact, or unaddressed ethical considerations.
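The acceptance quotas in question 14 are cumulative caps, counted from the top category downward. As a hypothetical illustration (the function name and the submission count of 9000 are my own, not part of the template), the maximum paper counts per band can be sketched as:

```python
# Hypothetical sketch: translate the cumulative acceptance quotas above into
# maximum paper counts for a given number of submissions. Each band includes
# all categories at or above the named one.

def quota_caps(num_submissions: int) -> dict:
    """Return the maximum number of papers allowed in each cumulative band."""
    caps = {
        "Accept and above": 0.25,          # Accept + Strong + Very Strong + Award
        "Strong Accept and above": 0.20,   # Strong + Very Strong + Award
        "Very Strong Accept and above": 0.10,
        "Award Quality": 0.01,
    }
    # Truncate toward zero: the quota is an upper bound, never rounded up.
    return {band: int(num_submissions * frac) for band, frac in caps.items()}

if __name__ == "__main__":
    for band, cap in quota_caps(9000).items():
        print(f"{band}: at most {cap} papers")
```

For example, with 9000 submissions at most 2250 papers could fall in Accept or above, and at most 90 could be Award Quality.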

15.  (CONFIDENCE) How confident are you in your evaluation? * (visible to other reviewers, visible to meta-reviewers)

  • Very confident. I have checked all points of the paper carefully. I am certain I did not miss any aspects that could otherwise have impacted my evaluation.
  • Quite confident. I tried to check the important points carefully. It is unlikely, though conceivable, that I missed some aspects that could otherwise have impacted my evaluation.
  • Somewhat confident, but there's a chance I missed some aspects. I did not carefully check some of the details, e.g., novelty, proof of a theorem, experimental design, or statistical validity of conclusions.
  • Not very confident. I am able to defend my evaluation of some aspects of the paper, but it is quite likely that I missed or did not understand some key details, or can't be sure about the novelty of the work.
  • Not confident. My evaluation is an educated guess.

16.  {Confidence-Justification} Please provide a justification for your confidence (only visible to SPC, AC, and Program Chairs). * (visible to meta-reviewers)


17.  (EXPERTISE) How well does this paper align with your expertise? * (visible to other reviewers, visible to meta-reviewers)

  • Expert: This paper is within my current core research focus and I am deeply knowledgeable about all of the topics covered by the paper.
  • Very Knowledgeable: This paper significantly overlaps with my current work and I am very knowledgeable about most of the topics covered by the paper.
  • Knowledgeable: This paper has some overlap with my current work. My recent work was focused on closely related topics and I am knowledgeable about most of the topics covered by the paper.
  • Mostly Knowledgeable: This paper has little overlap with my current work. My past work was focused on related topics and I am knowledgeable or somewhat knowledgeable about most of the topics covered by the paper.
  • Somewhat Knowledgeable: This paper has little overlap with my current work. I am somewhat knowledgeable about some of the topics covered by the paper.
  • Not Knowledgeable: I have little knowledge about most of the topics covered by the paper.

18.  {Expertise-Justification} Please provide a justification for your expertise (only visible to SPC, AC, and Program Chairs). * (visible to meta-reviewers)


19.  Confidential comments to SPC, AC, and Program Chairs (visible to meta-reviewers)

20.  I acknowledge that I have read the authors' rebuttal and made any necessary changes to my review.

 (visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

 

=======================================================
