Submission Guidelines for Reviewers
The ICMI 2021 Reviewer Tutorial, Guidelines and Examples [1]
Check Your Assigned Papers Carefully
As soon as you get your reviewing assignment, please go through all the papers to make sure that (a) there is no conflict of interest with you [2] (e.g., a paper authored by someone in your institution, a recent collaborator, or someone from whom you have received income) and (b) you are comfortable and able to review all assigned papers with adequate expertise and impartiality. If you have no knowledge of a paper's content area, do not agree to review it. If the above conditions are not met, please respond right away by emailing the Program Chairs so the paper in question can be reassigned.
What to Focus on in Reviewing—When to Reject a Paper, and When NOT to Reject
Focus on the paper’s main unique contribution, its potential for impact, its strong points, and what it offers that
is stimulating and novel. ICMI as a conference is looking for new ideas, and it values strong new directions, risk
taking with adequate rationale provided, and multidisciplinarity. A paper that simply replicates past work is likely
to be incremental and have less impact, unless it generalizes past findings and/or focuses on an unusually important
topic.
All papers should be technically sound, without major flaws, and should be written very clearly.
If a paper is not transparent with regard to how the research was conducted, reviewers may challenge its soundness or its readiness for review and evaluation.
Minor flaws can be corrected, and shouldn't be a reason to reject a paper. Likewise:
- no paper should be rejected for being “out-of-scope” for ICMI because it focuses on unimodal research. While
multimodality (in some form involving input or output) is prioritized, papers that address unimodal research
within the context of multimodality are entirely welcome. For example, they may address topics involving gaze,
speech, non-speech audio, gestures, etc. Such papers are viewed as strongest when they include a discussion of
how they relate to multimodality. However, GUI papers are rarely relevant unless, for example, they present a
control group for contrast.
- no paper should ever be rejected as “out-of-scope” for ICMI because of the particular type of paper,
disciplinary perspective, or scientific methods used. For example, both empirical and systems papers are
welcome. Papers written from different disciplinary perspectives (e.g., social science, engineering, medical or
health sciences, ethics and policy) are very welcome. And papers that use a wide range of research methods are
welcome, including ethnography, interviews, controlled empirical studies, machine and deep learning methods, and
other techniques.
A paper that has been published previously without substantial expansion (i.e., without >25% new content), or that is simultaneously under submission elsewhere, should be rejected out of hand without review. However, if a preprint of the
submission has been placed on arXiv, that is acceptable and not considered a violation of anonymity.
Performing Blind Reviews
Authors were asked to take reasonable efforts to hide their identities, including not listing their names or
affiliations and omitting acknowledgments and funding sources. This information will of course be included in the
published version. Reviewers should also make all efforts to keep their identity invisible to the authors. Reviewers
should not take steps to seek out the authors’ identity. If a preprint of the submission exists on arXiv, that is acceptable, not a reason to reject the paper, and not considered a breach of anonymity.
Be Specific and Provide Clear Rationale for Your Critical Points
Please be specific and adequately detailed in your reviews. In the discussion of related work and references, simply
saying "this is well known" or "this has been common practice in the industry for years" is not sufficient: cite
specific publications or public disclosures of techniques. Likewise, simply saying “this paper has major technical
flaws” is inadequate unless specific problems are described so they can be evaluated by all reviewers of the paper.
In summary, your review critiques must be justified with specific evidence. An unsubstantiated review may be overruled or even rejected by the committee member acting as meta-reviewer if the other reviewers disagree with it.
Your main critique should explain the paper's contribution, along with its main strengths and weaknesses. Stick to your main points. Explain your arguments. Be thorough. Provide enough discussion that other reviewers can understand the basis of your critique. Your reviews will be returned to the authors, so include specific constructive feedback on ways the authors can improve their papers. Always be
constructive in your tone. Note that reviews that are far too brief, inappropriate in tone, not impartial, recommend rejection for inappropriate reasons (e.g., reasons that conflict with these instructions), or fail to provide a clear rationale for their recommendations risk being discarded by the committee. For more suggestions on
writing your reviews, read the section below on Writing Technical Reviews.
Writing Technical Reviews
Your role is that of a valuable volunteer who (1) ensures that the best submissions are selected to advance future research in our professional community, and (2) provides constructive feedback to the authors so they can make corrections and learn to improve their work. Put yourself in the mindset of writing a review for someone whom you
wish to help, such as a respected colleague who wants your opinion on their work.
Below are some points to strengthen your reviews. Write a review that:
- you wish someone had written for one of your papers
- is focused on the paper’s main strengths and weaknesses, not extraneous minor issues
- is adequately long, at a minimum two or three paragraphs to a page
- provides a clear description and evidence for any critical claims you make
- maintains a constructive tone, describing how the paper could be further strengthened (whether it is being accepted or not)
- maintains your anonymity
- does not force authors to cite your own papers or papers by your colleagues, unless they are indeed central and/or seminal references
- is open to different types of submission, in terms of the scientific methods used, the disciplinary perspective maintained, and the modality topics researched; do not be parochial about claiming a paper is “out of scope,” but do be honest in letting an author know if you truly believe that a different publication venue may provide a better match for the paper
- has numerical review scores that match your prose critique
- is courteous, informative, incisive, and helpful; a review that you would be proud to add your name to, were it not anonymous
Remember that your reviews are read not just by the authors, but also by other reviewers, senior members of the
community acting as Area Chairs, and the Technical Program Chairs. They know your identity and are counting on your
professional input. They also have the latitude to honor your review with a best reviewer award, or to discard it
altogether if judged inappropriate.
This is your professional community, so serve it well. Reviewers who consistently write the best reviews are more likely to be valued, trusted, and chosen to serve on future program committees when opportunities arise.
In Appendix A, we provide clear examples of anonymized ICMI reviews that were (1) worthy of receiving a best reviewer award, and (2) judged inadequate and discarded from consideration.
Ethics for Reviewing Papers: Your Obligation to Protect Ideas
As a reviewer for ICMI, you have the strict responsibility to protect the confidentiality of the ideas represented in
papers you review. ICMI submissions are not published documents, and the work is considered new and proprietary by the authors. Individuals and organizations do not consider sending a paper to ICMI for review to constitute a public
disclosure. Protection of the ideas in the papers you receive means you may not:
- Show the submission (or related videos, images, or supplementary documents) under review to anyone who is not part of the ICMI review process, including your colleagues and students
- Use ideas from papers you review to develop your own work
- Keep copies of submitted papers; they must be destroyed after reviewing is complete
Conditions Causing Potential Reviewing Conflict of Interest:
- You work at the same institution as one of the authors.
- You have been directly involved in the work and will be receiving credit in some way, for example as part of the author's thesis committee or as a corporate consultant.
- You suspect that others might see a conflict of interest in your involvement. For example, even though Microsoft Research in Seattle and Beijing are more distant geographically than Berkeley and MIT, they are likely to be perceived as “both Microsoft.”
- You have collaborated with one of the authors on a paper, grant, or other major work during the past three years.
- You were the MS/PhD advisor of one of the authors, or the MS/PhD advisee of one of the authors. This represents a lifetime conflict of interest.
Instructions for Reviewing Blue Sky Papers
This new paper track, introduced in 2021, solicits papers relevant to ICMI content that go beyond the usual research
paper to present new visions that stimulate the community to pursue innovative new research directions. They may
challenge existing assumptions and methodologies, or propose new applications or theories. The papers are encouraged
to present high-risk controversial ideas, preferably ones that are potentially high-impact contributions. Submitted
papers are expected to represent deep reflection, to argue rigorously, and to present ideas from a high-level
synthetic viewpoint (e.g., multidisciplinary, based on multiple methodologies). Submissions are 4 pages, not counting references.
The invited review panel for these papers will base its judgement on criteria such as: breadth
of knowledge in the field or relevant multidisciplinary fields, creativity or novelty of ideas, provocativeness of
ideas that may run counter to existing assumptions, methods, theory and/or research beliefs, depth of reflection in
considering the topic and its implications, soundness of arguments and critiques, visionary quality of ideas,
written presentation quality, and guidance in pursuing important future research directions. No submission in this
track will be criticized for presenting too unconventional or “wacky” an idea, for presenting an idea that conflicts
with existing research beliefs, or for its political implications per se. High-risk papers presenting entirely novel
ideas or critiques are highly encouraged.
Reviewers must maintain an open mind in considering the potential contributions of submissions in this track. As a
reminder, unimodal as well as multimodal papers are both entirely acceptable at ICMI, as are papers spanning
different methodologies from human behavior studies, to user interaction studies, to system development papers, to
machine learning analyses and evaluation, and so forth. A wide range of research content, methodologies, and styles
are entirely within the scope of ICMI. In fact, since Blue Sky papers may introduce entirely new research challenges
for the ICMI community, they will not be judged on whether they are within the scope of ICMI per se.
In summary, the quality of Blue Sky papers will be discussed and debated by the invited panel that composes the Blue Sky track selection committee.
Appendix A
Award quality review: Anonymized example
The manuscript “(anonymized)” is a very clearly written summary of a subset of perceptual phenomena resulting from audio-visual integration in humans. The paper also describes computational models accounting for these phenomena,
and their strengths and weaknesses. The aim of the article is to promote a deeper understanding of the psychology of
human audio-visual integration, its complexity (including interaction effects), and implications for designing
artificial cognitive systems in areas like speech perception, visual perception, object and person tracking, speaker
localization and identification, multimodal biometrics, video annotation, and so forth.
As such, the article addresses an important but under-acknowledged problem in the design of new A.I. systems based on
information fusion. The authors properly frame the problem as one of optimizing performance in a noisy world, and of
humans continuously learning and adapting their multimodal integration weightings based on accumulation of evidence
as they navigate and interact with the world.
When the discussion turns to implications for designing cognitive systems, however, the paper falls short with
respect to being adequately informative. The authors do discuss very generic multimodal system differences in types
of fusion—for example, feature versus decision level approaches, and what general impact they may have on
temporally-demanding processing or responsivity.
However, the authors do not provide what could be a valuable and provocative discussion on the specific implications
of human multimodal integration processes for AI system design. Ideally, this would include (1) an informed
discussion of existing multimodal integration architectures, and their main design characteristics; (2) the
performance accuracy of these systems in different scenarios (e.g., noisy real-world cases); (3) specific
characteristics of human integration processes that differ from those deployed in systems, and what advantages they
may have over system architectures; (4) a walk-through example of a multimodal system architecture and how
processing occurs (for example, in the case of audio-visual speech perception), pointing out suggestions for
integrating human multimodal processing capabilities and testing them in the future, and so forth. That is, this
paper would be far more interesting and impactful if it extracted specific implications for experimenting with
future system design to potentially improve it.
A second issue with the manuscript’s current discussion is a focus on the concept that “illusory percepts” result in
an “imperfect representation of reality” and could have “costs” when designing artificial cognitive systems. This
emphasis implies that there is a canonically correct or optimal multimodal representation, which is problematic. The
evaluation of optimal performance, whether by a human or a system, must be determined from the perspective of
achieving a particular goal or goals, and this will entail tradeoffs.
From a system design viewpoint, this paper could begin by discussing the general classes of AI system being built
today—for example, autonomous systems without human assistance (e.g., self-driving cars) that may emphasize certain
goals, versus AI systems with synergistic human-system interfaces in which the AI component is a tool in service of
achieving human goals. A fully autonomous self-driving car may emphasize conservatism in avoiding all collisions,
whereas a partially-automated car driven by a person may emphasize collision avoidance with animate objects (e.g.,
pedestrians) over inanimate objects (e.g., litter blowing in the wind). In this example, the automated system may
fail to prioritize collision avoidance with the pedestrian over litter if faced with both simultaneously. Clearly,
system design in this case could benefit from better integration of prior knowledge, including human goals and
values, rather than simply construction of multimodal representations that aim to avoid “illusions.” One implication
here is that future AI systems could be more valuable if they were designed to be continuous learning systems, and
not just learning based on A-V perceptual phenomena in association with successfully avoiding all collisions—but
also explicit learning of human goals and values.
Related to point #4 above, it could be useful to take an example like rapid and accurate identification of a moving piece of litter by an autonomous vehicle, and to consider what types of A-V perceptual illusion could potentially occur and what the implications could be for designing an autonomous vehicle that avoids “illusions” that risk misidentifying the object (e.g., as animate) such that it swerves at high speed on a highway to avoid colliding with it.
In other words, your topic is a very important and compelling one for future system design—but to connect with
readers and have impact, you should think through concrete examples of implications for system design. I would also
suggest broadening your discussion of implications somewhat beyond the risk posed by “illusions.” One issue is that
the implication of your concern is that all AI cognitive systems should model human multisensory fusion processes,
which is naïve. They are likely to selectively model some human sensory fusion processes, while in other cases
pursuing new integration models that supersede human abilities. For example, if I were to design a mobile AI
cognitive system for distinguishing edible from poisonous mushrooms, an extremely difficult task for humans, I would
add a sensor for chemical analysis and also weight this more heavily than the visual and contextual processing
humans normally apply. This paper’s discussion should acknowledge that there are information sources that advanced
systems can and will deploy (e.g., lidar) that surpass human sensory abilities—so any fusion involving such data
sources will not model human multisensory perception. Some of these distinctions really need to be made when you
discuss implications for future AI system design, in order to be credible to your readers.
In the above respects, the present consideration of detecting and mitigating illusions in artificial cognitive
systems makes unstated assumptions, does not offer sufficiently concrete implications to be useful, and treats the
topic related to future system design too narrowly. In addition to the above issues, the discussion of what it means
to deploy a “flexible cognitive control mechanism” is presently far too vague to be useful. This paper also could
benefit by describing more clearly what is meant by “reward,” and why it should have an impact on human multisensory
integration and performance. This topic is not treated in enough depth in the present draft.
Additional minor point: Figure 7 and Table 1 should be enlarged.
In summary, this paper raises an important topic, but it needs to work on developing more specific and credible
implications for AI cognitive system design. In doing so, background needs to be provided on the state of AI systems
and their design today, including providing concrete examples. More extensive information also needs to be provided
on current systems’ information fusion processes, how they differ from human multisensory fusion, and how and why
system design could benefit by modeling aspects of human fusion-based processing. Readers also will want to know if
there are documented examples of system failure due to fusion-based processing that produces “illusions” (which
would be interesting)—or whether this article’s discussion is entirely hypothetical.
Review: 3 (possibly accept) Recommend the authors revise their paper to strengthen it in accord with the comments
provided.
Why review is award-winning: This review is award-winning because it fairly summarizes the paper and its contents, and thoughtfully critiques both the strengths and weaknesses of the paper. It provides an extended and constructive explanation of four areas in which the paper could be improved, and how, including specific advice and concrete examples. It then clearly summarizes the main review comments. It is also a substantial two-page review.
Consequences of outstanding reviews: The conference committee members notice, appreciate, and discuss the best reviews.
They have more influence on paper reviewing outcomes, because they provide a clear rationale that is more likely to
sway other reviewers. People who consistently provide outstanding reviews are candidates for being invited to join
future conference committees, for example as area chairs. They may also be recognized formally at the conference
with an award for their excellent service.
Inadequate review: Anonymized example: “This submission has major technical flaws, so it should be rejected. It
also is an inappropriate paper for ICMI because it only studies vision (not other modalities) and it has no machine
learning work.” Review: 1 (definitely reject)
Why review is inadequate: This is an inadequate review because (1) it fails to provide evidence for the “major technical flaws,” and (2) it gives unacceptable reasons for claiming the submission is out of scope for the conference. The length of this review (two sentences) further confirms its inadequacy.
Consequences of inadequate reviews: Inadequate reviews are highly likely to be discarded and not considered
during the review process. Reviewers who submit inadequate reviews also risk being removed from the reviewer
database.
[1] With thanks to the CVPR and CHI conferences, whose guidelines were used as a model.
[2] See the list of conflict-of-interest conditions at the end of this document for reference.