

Guidance to Area Chairs on Contribution Types

 

As part of the submission process, authors select a primary Contribution Type for their paper among: Algorithms/General, Theory/Foundational, Applied/Systems, Datasets/Benchmarks, or Concept & Feasibility (see also Guidance to Authors on Contribution Types). This selection is intended to signal the nature of the paper's main contribution and to ensure that it is evaluated using criteria appropriate for that type of work.

Reviewers receive contribution-type-specific instructions (see also Guidance to Reviewers on Contribution Types) and are asked to assess each paper primarily through the lens of the author's declared category. These instructions emphasize different aspects depending on the type of contribution (e.g., methodological novelty for algorithms, data quality for datasets, theoretical insight for foundational work, or feasibility and originality for concept papers).

When making recommendations and final decisions, Area Chairs (ACs) should explicitly take the selected Contribution Type into account by:

  • Interpreting reviews in the context of the relevant contribution-specific criteria,
  • Weighing reviewer concerns with an understanding of what should and should not be prioritized for the Contribution Type of a paper, and
  • Avoiding penalizing papers for lacking features that are not central to their declared contribution type.

If reviewers disagree with the author's chosen Contribution Type and evaluate the paper under a different framing, this must be clearly justified in their reviews. In such cases, Area Chairs should carefully consider whether this disagreement is reasonable and how it affects the interpretation of the reviews and the overall assessment of the paper.

More generally, ACs are encouraged to treat Contribution Types not as rigid tracks, but as a tool for fairer, more context-aware evaluation, recognizing that high-quality research can take multiple forms within the ECCV community.

For FAQs, see the FAQs for Area Chairs section below.
 

Access to Reviewer Instructions

Area Chairs are encouraged to familiarize themselves with the Reviewer Guidelines for each Contribution Type, as these define the expectations and criteria that reviewers are explicitly asked to apply during the evaluation process. These guidelines provide important context for interpreting reviews, especially when comparing papers across different Contribution Types or assessing reviewer concerns.

The complete Guidance to Reviewers on Contribution Types is available here.

Understanding how reviewers were guided will help Area Chairs better calibrate their own assessments, resolve discrepancies between reviews, and make more informed and consistent final recommendations.
 

Guidance to Area Chairs on Reviewer Matching and Contribution Types

When suggesting or assigning reviewers in OpenReview, Area Chairs will be able to see both:

  1. The Contribution Type selected by the authors for each paper, and
  2. The Contribution Type preference(s) indicated by reviewers during registration, with Algorithms/General as the default (reviewers may indicate multiple preferences).

Reviewer contribution-type preferences should be viewed as one useful signal among many, rather than a strict requirement for matching. They can help ACs reason about potential fit between papers and reviewers, but they are not definitive or binding.

In practice, ACs are encouraged to consider the following:

  • For papers with more specialized Contribution Types (e.g., Theory/Foundational, Datasets/Benchmarks, Applied/Systems, or Concept & Feasibility), ACs should include reviewers who have expressed interest in that same type, where possible and appropriate, provided they also have relevant subject-matter expertise.
  • At the same time, reviewer Contribution Type preferences may be noisy, incomplete, or overly broad, and should not override clear subject-matter expertise.

ACs should continue to prioritize:

  • topical expertise, and
  • diversity of perspectives,

while using Contribution Type information as an additional factor that can improve matching and reduce expectation mismatches. Concretely, taking a Theory/Foundational submission as an example, do not suggest a reviewer solely because they indicated a preference for reviewing Theory/Foundational papers if they do not have experience in the subject area of the submission.
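As a purely illustrative sketch (not a description of any OpenReview feature), the prioritization above can be read as a ranking rule in which topical expertise is the primary key and a matching Contribution Type preference acts only as a secondary tie-breaker; every name, field, and threshold below is hypothetical.

```python
# Hypothetical illustration only: these fields and scores are not part of
# OpenReview; they merely encode "topical expertise first, Contribution Type
# preference as a secondary tie-breaker, never a substitute for expertise".
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    expertise_score: float  # assumed affinity with the paper's subject area, in [0, 1]
    ct_preferences: set     # Contribution Types the reviewer opted into


def rank_candidates(candidates, paper_ct, min_expertise=0.5):
    """Rank reviewer candidates for one paper.

    Candidates below the (hypothetical) expertise threshold are dropped, so a
    CT match alone cannot promote a reviewer without subject-matter fit.
    """
    qualified = [c for c in candidates if c.expertise_score >= min_expertise]
    return sorted(
        qualified,
        key=lambda c: (c.expertise_score, paper_ct in c.ct_preferences),
        reverse=True,
    )


# Example: a Theory/Foundational paper. The CT match lets B edge out C at
# equal expertise, but it cannot rescue D, who lacks subject-matter fit.
pool = [
    Candidate("A", 0.9, {"Algorithms/General"}),
    Candidate("B", 0.7, {"Theory/Foundational"}),
    Candidate("C", 0.7, {"Datasets/Benchmarks"}),
    Candidate("D", 0.3, {"Theory/Foundational"}),
]
for c in rank_candidates(pool, "Theory/Foundational"):
    print(c.name, c.expertise_score)
```

The design choice in this sketch mirrors the guidance: the Contribution Type signal only reorders reviewers who already clear the expertise bar, while considerations such as diversity of perspectives remain a matter of AC judgment rather than any score.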

The intent of this system is to support, not constrain, AC judgment: contribution-type signals can guide reviewer selection, but ACs should feel comfortable deviating from them when doing so leads to more informative and/or more balanced reviews.
 

FAQs for Area Chairs
 

Q. What are “Contribution Types” and why do we use them?

A. Contribution Types (CTs) are meant to support fair, context-aware evaluation across diverse kinds of computer vision research (methods, theory, systems, datasets, early-stage ideas). Authors pick one primary type to signal the paper’s central contribution; reviewers and ACs should interpret the work through that lens.

 

Q. Are CTs separate tracks?

A. No, the CTs are not separate tracks.

 

Q. How should ACs use CTs for reviewer matching?

A. CT preferences are one signal among many and are not binding. For specialized types (Theory / Datasets / Applied / Concept), it can help to include reviewers who expressed interest in that type—but never at the expense of subject-matter expertise. Continue to prioritize topical expertise and diversity of perspectives.

 

Q. How are the reviewers instructed to use the CTs?

A. The reviewers are instructed to review each submission according to its CT. See the reviewing guidelines here.

 

Q. How should ACs use Contribution Types when interpreting reviews and making recommendations?

A. ACs should explicitly take the selected type into account by: interpreting reviews in the context of type-specific criteria, weighing concerns based on what should/shouldn’t be prioritized for that type, and avoiding penalties for non-central dimensions.
 

Q. If reviewers re-frame the Contribution Type, what should the AC do?

A. Ensure the reviewer’s disagreement is clearly justified (as required), then judge whether the re-framing is reasonable and how it affects the overall assessment.