=======================================================

A while ago I reviewed submissions for AAAI 2022 on behalf of our lab and found the review template quite good, so I am saving a copy here as a reference for future reviews for other journals and conferences.

========================================

Thank you for your contribution to AAAI 2022.

Edit Review

Paper ID:

Paper Title:

Track: Main Track

REVIEW QUESTIONS

1.  {Summary} Please briefly summarize the main claims/contributions of the paper in your own words. (Please do not include your evaluation of the paper here.) * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

2.  {Novelty} How novel are the concepts, problems addressed, or methods introduced in the paper? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The main ideas of the paper are ground-breaking.
  • Good: The paper makes non-trivial advances over the current state-of-the-art.
  • Fair: The paper contributes some new ideas.
  • Poor: The main ideas of the paper are not novel or represent incremental advances.

3.  {Soundness} Is the paper technically sound? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: I am confident that the paper is technically sound, and I have carefully checked the details.
  • Good: The paper appears to be technically sound, but I have not carefully checked the details.
  • Fair: The paper has minor, easily fixable, technical flaws that do not impact the validity of the main results.
  • Poor: The paper has major technical flaws.

4.  {Impact} How do you rate the likely impact of the paper on the AI research community? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The paper is likely to have high impact across more than one subfield of AI.
  • Good: The paper is likely to have high impact within a subfield of AI OR moderate impact across more than one subfield of AI.
  • Fair: The paper is likely to have moderate impact within a subfield of AI.
  • Poor: The paper is likely to have minimal impact on AI.

5.  {Clarity} Is the paper well-organized and clearly written? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The paper is well-organized and clearly written.
  • Good: The paper is well organized but the presentation could be improved.
  • Fair: The paper is somewhat clear, but some important details are missing or unclear.
  • Poor: The paper is unclear and very hard to understand.

6.  {Evaluation} If applicable, are the main claims well supported by experiments? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The experimental evaluation is comprehensive and the results are compelling.
  • Good: The experimental evaluation is adequate, and the results convincingly support the main claims.
  • Fair: The experimental evaluation is weak: important baselines are missing, or the results do not adequately support the main claims.
  • Poor: The experimental evaluation is flawed or the results fail to adequately support the main claims.
  • Not applicable: The paper does not present an experimental evaluation (the main focus of the paper is theoretical).

7.  {Resources} If applicable, how would you rate the new resources (code, data sets) the paper contributes? (It might help to consult the paper’s reproducibility checklist) * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The shared resources are likely to have a broad impact on one or more sub-areas of AI.
  • Good: The shared resources are likely to be very useful to other AI researchers.
  • Fair: The shared resources are likely to be moderately useful to other AI researchers.
  • Poor: The shared resources are unlikely to be useful to other AI researchers.
  • Not applicable: For instance, the primary contributions of the paper are theoretical.

8.  {Reproducibility} Are the results (e.g., theorems, experimental results) in the paper easily reproducible? (It may help to consult the paper’s reproducibility checklist.) * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: key resources (e.g., proofs, code, data) are available and key details (e.g., proof sketches, experimental setup) are comprehensively described for competent researchers to confidently and easily reproduce the main results.
  • Good: key resources (e.g., proofs, code, data) are available and key details (e.g., proofs, experimental setup) are sufficiently well-described for competent researchers to confidently reproduce the main results.
  • Fair: key resources (e.g., proofs, code, data) are unavailable but key details (e.g., proof sketches, experimental setup) are sufficiently well-described for an expert to confidently reproduce the main results.
  • Poor: key details (e.g., proof sketches, experimental setup) are incomplete/unclear, or key resources (e.g., proofs, code, data) are unavailable.

9.  {Ethical Considerations} Does the paper adequately address the applicable ethical considerations, e.g., responsible data collection and use (e.g., informed consent, privacy), possible societal harm (e.g., exacerbating injustice or discrimination due to algorithmic bias), etc.? * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Excellent: The paper comprehensively addresses all of the applicable ethical considerations.
  • Good: The paper adequately addresses most, but not all, of the applicable ethical considerations.
  • Fair: The paper addresses some but not all of the applicable ethical considerations.
  • Poor: The paper fails to address most of the applicable ethical considerations.
  • Not Applicable: The paper does not have any ethical considerations to address.

10.  {Reasons to Accept} Please list the key strengths of the paper (explain and summarize your rationale for your evaluations with respect to questions 1-9 above). * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

11.  {Reasons to Reject} Please list the key weaknesses of the paper (explain and summarize your rationale for your evaluations with respect to questions 1-9 above). * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

12.  {Questions for the Authors} Please provide questions that you would like the authors to answer during the author feedback period. Please number them. * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

13.  {Detailed Feedback for the Authors} Please provide other detailed, constructive feedback to the authors. * (visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

14.  {Overall Evaluation} Please provide your overall evaluation of the paper, carefully weighing the reasons to accept and the reasons to reject the paper. Ideally, we should have:

  • No more than 25% of the submitted papers in (Accept + Strong Accept + Very Strong Accept + Award Quality) categories;
  • No more than 20% of the submitted papers in (Strong Accept + Very Strong Accept + Award Quality) categories;
  • No more than 10% of the submitted papers in (Very Strong Accept + Award Quality) categories;
  • No more than 1% of the submitted papers in the Award Quality category.

(See the short sketch after the rating descriptions below for what these quotas mean in absolute numbers.)

(visible to authors during feedback, visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

  • Award Quality: Technically flawless paper with groundbreaking impact on one or more areas of AI, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations.
  • Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI and excellent impact on multiple areas of AI, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
  • Strong Accept: Technically strong paper with novel ideas, excellent impact on at least one area of AI or high to excellent impact on multiple areas of AI, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
  • Accept: Technically solid paper, with high impact on at least one sub-area of AI or moderate to high impact on more than one area of AI, with good to excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
  • Weak Accept: Technically solid, moderate to high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
  • Borderline Accept: Technically solid paper where reasons to accept, e.g., novelty, outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
  • Borderline Reject: Technically solid paper where reasons to reject, e.g., lack of novelty, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
  • Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility, incompletely addressed ethical considerations.
  • Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility, mostly unaddressed ethical considerations.
  • Very Strong Reject: For instance, a paper with trivial results, limited novelty, poor impact, or unaddressed ethical considerations.
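
To make the rating quotas above concrete, here is a minimal sketch in Python (purely illustrative; the 1,000-submission pool is a made-up figure, not an AAAI statistic) that works out the cap each quota implies:

    # Illustrative only: per-band caps implied by the AAAI 2022 rating quotas,
    # computed for a hypothetical pool of submissions.
    n_submissions = 1000  # hypothetical figure, not an actual AAAI count

    # Each quota applies cumulatively to a rating band and everything above it.
    quotas = [
        ("Accept + Strong Accept + Very Strong Accept + Award Quality", 0.25),
        ("Strong Accept + Very Strong Accept + Award Quality", 0.20),
        ("Very Strong Accept + Award Quality", 0.10),
        ("Award Quality", 0.01),
    ]

    for band, fraction in quotas:
        cap = int(n_submissions * fraction)
        print(f"{band}: at most {cap} papers ({fraction:.0%})")

With 1,000 submissions this gives caps of 250, 200, 100, and 10 papers respectively, which is why the form asks reviewers to use the top categories sparingly.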

15.  {Confidence} How confident are you in your evaluation? * (visible to other reviewers, visible to meta-reviewers)

  • Very confident. I have checked all points of the paper carefully. I am certain I did not miss any aspects that could otherwise have impacted my evaluation.
  • Quite confident. I tried to check the important points carefully. It is unlikely, though conceivable, that I missed some aspects that could otherwise have impacted my evaluation.
  • Somewhat confident, but there's a chance I missed some aspects. I did not carefully check some of the details, e.g., novelty, proof of a theorem, experimental design, or statistical validity of conclusions.
  • Not very confident. I am able to defend my evaluation of some aspects of the paper, but it is quite likely that I missed or did not understand some key details, or can't be sure about the novelty of the work.
  • Not confident. My evaluation is an educated guess.

16.  {Confidence-Justification} Please provide a justification for your confidence (only visible to SPC, AC, and Program Chairs). * (visible to meta-reviewers)


17.  {Expertise} How well does this paper align with your expertise? * (visible to other reviewers, visible to meta-reviewers)

  • Expert: This paper is within my current core research focus and I am deeply knowledgeable about all of the topics covered by the paper.
  • Very Knowledgeable: This paper significantly overlaps with my current work and I am very knowledgeable about most of the topics covered by the paper.
  • Knowledgeable: This paper has some overlap with my current work. My recent work was focused on closely related topics and I am knowledgeable about most of the topics covered by the paper.
  • Mostly Knowledgeable: This paper has little overlap with my current work. My past work was focused on related topics and I am knowledgeable or somewhat knowledgeable about most of the topics covered by the paper.
  • Somewhat Knowledgeable: This paper has little overlap with my current work. I am somewhat knowledgeable about some of the topics covered by the paper.
  • Not Knowledgeable: I have little knowledge about most of the topics covered by the paper.

18.  {Expertise-Justification} Please provide a justification for your expertise (only visible to SPC, AC, and Program Chairs). * (visible to meta-reviewers)


19.  Confidential comments to SPC, AC, and Program Chairs (visible to meta-reviewers)

20.  I acknowledge that I have read the authors' rebuttal and made changes to my review where necessary. (visible to authors after notification, visible to other reviewers, visible to meta-reviewers)

=======================================================
