Guidance to Authors on Contribution Types
As part of the submission process, authors are asked to select one primary contribution type that best characterizes the main focus of their paper. This selection is intended to help align reviewer expectations with the nature of your work and to support fair, appropriate, and context-aware evaluation.
Your choice does not restrict the content of your paper—it simply signals what you consider to be its central contribution.
Overview of Contribution Types
Please select the category that most accurately reflects the primary contribution of your paper:
1. Algorithms / General
Select this category if your paper’s main contribution is a new method, model, algorithm, or similar technical advance in computer vision or related areas. This includes work that improves performance, efficiency, robustness, generality, simplicity, or applicability on established problems.
This category also serves as the default option. If your paper lies on the boundary between multiple contribution types (e.g., it mixes methodological, theoretical, and applied elements), or if you are uncertain which category fits best, Algorithms/General is the appropriate choice. This corresponds to the traditional “standard paper” style of reviewing at major computer vision conferences.
2. Theory / Foundational
Select this category if your paper primarily advances theoretical understanding, formal analysis, or foundational principles of computer vision or machine learning. Suitable papers may include new mathematical frameworks, proofs, formal characterizations, or conceptual insights that deepen understanding of existing methods or phenomena.
Strong empirical results are welcome but not required if the core contribution is theoretical.
3. Applied / Systems
Select this category if your main contribution is a practical system, real-world deployment, large-scale implementation, or engineering solution. Papers in this category typically emphasize practicality, scalability, reliability, efficiency, and real-world impact rather than purely methodological novelty.
If your work’s value lies mainly in integration, deployment, system design, or solving real constraints, this is likely the right category.
4. Datasets / Benchmarks
Select this category if your primary contribution is a new dataset, benchmark, evaluation protocol, or challenge for the community. Your paper should focus on the motivation, design, construction, quality, evaluation of baselines, and utility of the data or benchmark, rather than on a novel method.
If you include a new method, it will generally be treated as a baseline, not the main contribution.
5. Concept & Feasibility
Select this category if your paper introduces a novel idea, paradigm, or research direction with feasibility-level validation rather than full-scale experimental development. This track is intended for original, high-risk/high-reward concepts supported by meaningful preliminary evidence.
This category does not relax rigor—it signals that extensive scaling, exhaustive benchmarks, or production-level deployment are not the primary contribution.
How to Choose When in Doubt
If your paper clearly fits one category, please select it.
If your paper:
- spans multiple categories,
- does not fit neatly into any single specialized track, or
- leaves you unsure which label is best,
-> please choose Algorithms / General.
This is intentionally designed as the default, boundary-friendly category and will be reviewed using broadly applicable criteria similar to traditional CV conference reviewing.
What Reviewers Will Look For
Reviewer expectations vary by contribution type. We strongly encourage authors to consult the Guidance to Reviewers on Contribution Types, available on the conference website, before selecting a category.
These guidelines explain in detail:
- what reviewers are instructed to prioritize for each contribution type,
- what aspects should not be penalized, and
- how different kinds of scientific contributions will be evaluated.
Selecting a contribution type that aligns with your paper will help ensure your work is assessed fairly and on the dimensions most relevant to your contribution.
On Possible Reviewer Disagreement with the Selected Contribution Type
Authors should be aware that reviewers may occasionally disagree with the contribution type selected for a submission. In such cases, reviewers are permitted to evaluate the paper through the lens of a different contribution type if they explicitly state and justify this decision in their review.
If this happens, reviewers are expected to clearly explain:
- why they believe the selected contribution type is inappropriate,
- which alternative category they consider more suitable, and
- how this reframing influenced their evaluation.
This justification should appear in the “Justification of Rating” section of the review, so that authors, reviewers, area chairs, senior area chairs, and program chairs can properly interpret the assessment.
Authors should therefore select the category they genuinely believe best reflects the primary contribution of their work, knowing that any departure from this choice by reviewers must be transparent, reasoned, and clearly documented.
Authors will also have the opportunity to contest such a reclassification in the rebuttal.
If the Program Committee (including Program Chairs, Senior Area Chairs, and Area Chairs) finds the reviewers’ justification satisfactory and agrees that a paper should have been submitted under a different contribution type, it may evaluate the paper according to the criteria and requirements of the newly identified contribution type. This includes applying any policies specific to that type: e.g., a paper newly categorized as Datasets/Benchmarks will be subject to the Dataset Release Policy.
Availability Requirements for Datasets/Benchmarks Submissions - Dataset Release Policy
By selecting Datasets/Benchmarks as the primary contribution type, authors confirm that the proposed dataset and/or benchmark claimed as a contribution of the paper will be publicly available by the time of camera-ready submission.
Together with the camera-ready version of the paper, authors must provide a stable and accessible URL where the dataset and/or benchmark—and any associated code or documentation—can be accessed by the community. This URL must also be included in the final version of the paper. Authors may impose reasonable access restrictions (e.g., requiring completion of an access request form), provided that access is not categorically denied. Any such restrictions must be clearly described in the main submission or supplementary material.
Failure to meet this requirement—specifically, failure to provide a stable and accessible URL that contains the dataset and/or benchmark claimed as a contribution of the paper by the camera-ready submission deadline—will result in removal of the paper from the conference proceedings.
Authors must also clearly specify in the submission any parts of the dataset and/or benchmark that they do not plan to make publicly available under the provided URL (e.g., due to legal, privacy, or ethical constraints).
This disclosure is required so that:
- reviewers can properly assess the paper and do not consider non-public components as contributions, and
- if the paper is accepted, the program committee can verify that the publicly available content at the provided URL matches the authors’ claims.
Any dataset or benchmark components not explicitly disclosed as non-public will be assumed to be intended for public release and may be evaluated accordingly. Failure to clearly disclose non-public components may be treated as a violation of the Dataset Release Policy.
Datasets or Benchmarks That Cannot Be Publicly Released
If authors are unable, due to legal, privacy, or ethical constraints, to make a dataset or benchmark publicly available, then that dataset or benchmark cannot be considered a scientific contribution of the paper (whether primary or secondary), as it does not constitute a reusable resource for the computer vision community. Authors must not claim such private datasets or benchmarks as contributions, and reviewers will be instructed to verify this.
In such cases, submission to ECCV’26 is still permitted; however, the paper is expected to present substantial additional contributions beyond the private dataset or benchmark itself. These may include, for example:
- a novel method or technical approach,
- a clearly defined and reusable evaluation protocol, and/or
- new scientific observations, analyses, or results derived from the data.
Authors should carefully consider whether the remaining publicly shareable elements of their work still justify selecting Datasets/Benchmarks as the primary contribution type (e.g., if the main contribution is a publicly available evaluation protocol, partial dataset, metadata, or tooling), or whether the true primary contribution lies elsewhere (e.g., in a new method or analysis). In the latter case, authors should select a more appropriate contribution type, such as Algorithms/General.
The intent of this policy is to ensure that papers labeled as Datasets/Benchmarks primarily deliver community-accessible resources, rather than relying on private datasets or benchmarks as their central contribution.
Reviewers are explicitly instructed to verify that submissions claiming a Datasets/Benchmarks contribution type meet these availability requirements and not to consider private datasets or benchmarks as contributions.