Euro-Par 2026 Call for Artifacts
The Euro-Par conference series encourages authors of accepted papers to participate in the Artifact Evaluation Process (AEP). The goal of the AEP is to promote transparency, reproducibility, and reusability of research results. Authors of papers accepted for presentation at Euro-Par 2026 will receive an official invitation to submit supplementary materials such as source code, tools, benchmarks, datasets, scripts, and models that support the claims made in their accepted paper.
The primary objective of the evaluation is to assess the reproducibility of the experimental results presented in the paper. It is not required to reproduce every experiment at full scale. However, the submitted artifact must enable the reproduction of at least a significant subset of the reported results, for example, representative figures, a main performance table, or a key comparison discussed in the paper. The artifact should include clear documentation and step-by-step instructions to allow reviewers to execute the experiments and verify the reported outcomes.
All submitted artifacts will undergo an independent review conducted by a dedicated Artifact Evaluation Committee. The evaluation will focus on the quality and completeness of the artifact, the reproducibility of the selected results, and the clarity and usability of the accompanying documentation.
Each artifact will receive a formal assessment indicating whether the evaluation was successful, together with constructive feedback and suggestions for improvement. During the review process, a technical clarification phase will allow reviewers to anonymously request additional information or minor fixes. Authors are expected to respond and address the issues within a few days. Failure to provide timely clarification or to resolve blocking technical problems may result in rejection of the artifact. The outcome of the AEP will not affect the acceptance status of the corresponding paper.
Important Dates
- Artifact submission deadline: 8 May 2026 (AoE)
- Technical clarification window: 15–22 May 2026
- Author notification: 29 May 2026
- Camera-ready deadline for papers with an accepted artifact: 3 June 2026
Submission Link (submissions not open yet)
Artifact submissions are handled through EasyChair (Artifact Evaluation Track): https://easychair.org/conferences/?conf=europar2026
GUIDELINES
Responsibility of the Authors
Authors are responsible for making the reproduction process clear, guided, and feasible within reasonable time and resource constraints. The quality of the documentation and the ease of use of the artifact will be explicitly considered during evaluation.
Artifacts that are difficult to understand, poorly documented, or require excessive manual intervention may not receive a positive evaluation, even if the scientific work is sound. The outcome of the artifact evaluation does not affect the acceptance status of the corresponding Euro-Par 2026 paper.
Artifact Size and Data Limits
To ensure a fair and manageable evaluation process, strict size limits apply. The main artifact package, typically provided as a .zip or .tar.gz archive via a Google Drive or Dropbox URL, must not exceed 1 GB.
If additional datasets must be downloaded separately, their total uncompressed size must not exceed 5 GB.
If the original experiments rely on very large datasets, authors must provide reduced versions that allow the evaluation of the workflow and the consistency of the results at a smaller scale.
Submissions that exceed these limits without prior approval from the Artifact Evaluation Chairs will not be considered.
Installation and Resources
For security and portability reasons, artifacts that require root access or administrator privileges for installation or execution will not be considered.
Artifacts must be executable in one of the following ways:
- As user-level software on a standard Linux environment.
- Through a container-based solution such as Docker, provided it can be executed without requiring root privileges on the reviewer's side.
- As a preconfigured virtual machine image.
All dependencies must be clearly documented. Automated installation scripts with proper documentation must be provided by the authors.
If the artifact requires specific hardware, such as GPUs or access to HPC clusters, this must be clearly stated in the “Overview Document” (see below). Whenever possible, a simplified configuration that can run on standard hardware should be provided for validation purposes.
In exceptional cases, reviewers may require access to the authors’ computational resources when the artifact cannot be reasonably reproduced on alternative platforms and no simplified configuration is possible. If such access is required, authors must provide it through an anonymous account, valid only for the duration of the artifact evaluation. The account must grant only the minimum permissions necessary to run the required experiments.
If temporary access cannot be provided when requested, the artifact will not be accepted.
Submission Guidelines
Each artifact submission must include:
- An “Overview Document” in PDF format containing the information described below.
- The complete source code or binaries necessary to reproduce the selected results.
- Clear installation instructions.
- Step-by-step instructions to execute the experiments.
- A main script, for example run.sh or equivalent, that reproduces the selected results with a minimal number of commands.
- An explicit mapping between the commands to be executed, the generated outputs, and the corresponding figures or tables in the paper.
Artifacts that lack a clear correspondence between their outputs and the results reported in the paper will be penalized.
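As an illustration, a main script along these lines can make the command-to-result mapping explicit. This is only a sketch: the experiment names, output paths, and figure/table numbers below are hypothetical placeholders, not prescribed by the AEP.

```shell
#!/bin/sh
# Hypothetical run.sh sketch. Experiment names, output paths, and the
# figure/table numbers are placeholders chosen for illustration only.

mkdir -p results

run_experiment() {
    # $1 = experiment name, $2 = figure or table it reproduces
    echo "[run.sh] running $1 (reproduces $2)"
    # Placeholder for the real command, e.g.: ./bin/$1 > results/$1.csv
    echo "placeholder output for $1" > "results/$1.out"
}

run_experiment exp_scaling  "Figure 3"
run_experiment exp_overhead "Table 2"

echo "[run.sh] done: outputs are in results/, mapped to figures as printed above"
```

Printing the mapping while the script runs gives reviewers the required correspondence between commands, outputs, and paper results without consulting extra documentation.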
Overview Document
Each artifact submission must include a concise Overview Document in PDF format. This document guides reviewers through the installation, execution, and validation of the artifact in relation to the results presented in the paper. It should be a few pages long and structured as follows.
1. Getting Started Guide
This section must provide clear setup instructions, including required software, exact versions, environment settings, and configuration steps. A reviewer unfamiliar with the work should be able to complete the setup with reasonable effort.
Include:
- Operating system and version used for development and testing.
- All software dependencies and versions.
- Commands to install or fetch any required components (e.g., package managers, environment modules, container runtimes).
- Basic tests to verify the environment is correctly configured.
In standard cases, the installation and configuration steps should be completed within one hour on a recent workstation or cluster compute node, without requiring manual troubleshooting.
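A short script can cover the last item in the list above. The following is only a sketch; the tool list (tar, awk, gzip) is an arbitrary example of what an artifact might depend on.

```shell
#!/bin/sh
# Hypothetical environment check; the tool list below is an illustrative
# example, not a Euro-Par requirement.

missing=0
for tool in tar awk gzip; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "OK: $tool found"
    else
        echo "MISSING: $tool"
        missing=1
    fi
done

[ "$missing" -eq 0 ] && echo "Environment check passed"
```

A check like this lets reviewers confirm a correctly configured environment in seconds, before committing to the full installation.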
2. Step-by-Step Instructions to Reproduce Results
This section must describe precisely how to reproduce the selected results from the paper.
For each experiment:
- Provide the commands or scripts to execute.
- Indicate expected outputs and file locations.
- Map the execution to specific figures or tables in the paper.
- Report the execution time observed on the reference platform and describe that platform.
If full execution requires long runtimes, provide reduced test cases that complete within a reasonable time while demonstrating the correctness of the method and the consistency of the workflow.
When appropriate:
- Provide expected outputs or logs for comparison.
- Include simple validation checks.
- Explain acceptable variability in performance results due to hardware differences.
The Overview Document must clearly explain how to assess the correctness of the execution and how to interpret the outputs in relation to the claims of the paper. Automated post-processing scripts for generating plots or summary tables are strongly encouraged.
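For instance, a simple validation check for a performance number might compare the measured value against the paper's reference within an explicit tolerance. The numbers and the 10% threshold below are made-up placeholders.

```shell
#!/bin/sh
# Hypothetical validation check; the values and tolerance are placeholders.
REFERENCE=12.4   # seconds, value reported in the paper (example)
MEASURED=13.1    # seconds, value observed by the reviewer (example)
TOLERANCE=0.10   # accept up to 10% relative deviation

awk -v ref="$REFERENCE" -v got="$MEASURED" -v tol="$TOLERANCE" 'BEGIN {
    dev = (got - ref) / ref
    if (dev < 0) dev = -dev
    if (dev <= tol) {
        printf "PASS: deviation %.1f%% within tolerance\n", dev * 100
    } else {
        printf "FAIL: deviation %.1f%% exceeds tolerance\n", dev * 100
    }
}'
```

Stating the acceptable deviation explicitly, as in the TOLERANCE variable here, tells reviewers when a mismatch reflects ordinary hardware variability rather than a reproduction failure.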
Artifacts that rely on long-running executions without a reasonable reduced alternative, or that require proprietary software not freely available, will not be evaluated.
Execution Time
The experiments required for evaluation must complete within a reasonable time. As a guideline, the full reproduction process should finish in no more than 8 hours. Artifacts whose full experiments exceed this limit must provide a reduced alternative that completes within the evaluation budget. The reduced alternative must use smaller inputs and/or fewer iterations, but it must preserve the full experimental pipeline and allow reviewers to validate the main claims qualitatively and, when feasible, quantitatively. Authors must explain how to interpret the reduced results, including which trends, ratios, or invariants should match the paper, and what deviations are expected due to the reduced scale.
Evaluation Criteria
Artifacts will be evaluated according to the following criteria:
- Reproducibility. The selected results can be reproduced as described.
- Completeness. All necessary components are included.
- Clarity. The documentation is precise and unambiguous.
- Ease of use. The process is automated and robust.
- Compliance. The artifact respects size limits, execution time constraints, and privilege requirements.
The Artifact Evaluation process is independent of the scientific review of the paper and is conducted by a separate Artifact Evaluation Program Committee. The members of this committee are distinct from the main Technical Program Committee and focus exclusively on assessing the reproducibility and technical quality of the submitted artifacts.
Technical Clarification Window
Authors must remain responsive throughout the entire artifact evaluation period. In particular, timely responsiveness during the Technical Clarification Window is mandatory. Failure to respond during this window may result in rejection of the artifact.
During the Technical Clarification Window, authors may be contacted by one or both Artifact Evaluation Chairs. The chairs act as proxies for the reviewers: reviewer questions are forwarded through the chairs, and reviewers never contact the authors directly.
Authors must ensure that emails from the Artifact Evaluation Chairs are not filtered as spam. It is the authors’ responsibility to monitor the email address used for the artifact submission and to reply within the required time.
Archiving of Accepted Artifacts and DOI Assignment
After acceptance, authors must upload the final version of their artifact to the official Euro-Par 2026 Zenodo community and click “Submit for review.” The Artifact Evaluation Chairs will verify the submission before approving it for publication.
Authors must generate a new DOI through Zenodo during the upload process. Pre-existing DOIs must not be reused. The DOI assigned by Zenodo is the persistent identifier that must be cited in the paper.
The camera-ready version of the corresponding Euro-Par 2026 paper must include:
- An “Artifact Availability” statement, placed at the end of the paper, typically merged with the acknowledgments section, for example, “Acknowledgements and Artifact Availability” or “Artifact Availability”.
- A full bibliographic reference to the artifact’s Zenodo entry in the reference list, including the complete DOI.
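As an illustration, the Zenodo reference might take the form of the following BibTeX entry; every field value, including the DOI digits, is a placeholder to be replaced with the actual metadata generated by Zenodo.

```bibtex
@misc{author2026artifact,
  author    = {First Author and Second Author},
  title     = {Artifact for ``Paper Title'' (Euro-Par 2026)},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.XXXXXXX},
  note      = {Artifact evaluated by the Euro-Par 2026 Artifact Evaluation Committee}
}
```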
Only artifacts submitted through the official Euro-Par 2026 Zenodo community and approved by the chairs will receive final recognition (a seal on the first page of the paper).
Artifact Chairs
- Daniele De Sensi, Sapienza University of Rome, Italy
- Javier García-Blas, Universidad Carlos III de Madrid, Spain
Artifact Evaluation Committee
TBD