CLEF 2025 - Call for Papers
CLEF 2025 Conference and Labs of the Evaluation Forum
Information Access Evaluation meets Multilinguality, Multimodality, and Visualization
9-12 September 2025, Madrid, Spain
Good to Know
The CLEF 2025 Conference welcomes papers in the Information Access domain that describe rigorous hypothesis testing, regardless of whether the results are positive or negative. Each submission is reviewed in two stages; see details below.
Important Dates (AoE)
- 11 May 2025 (extended from 6 May 2025): Abstract submission of Long and Short papers (through any of the track entries in EasyChair)
- 20 May 2025 (extended from 13 May 2025): Full paper submission deadline (Long, Short)
- 21 May 2025: Abstract submission of Best of 2024 Labs papers
- 27 May 2025: Best of 2024 Labs paper submission
- 11 June 2025 (extended from 10 June 2025): Notification of acceptance (Long, Short, Best of 2024 Labs)
- 23 June 2025: Camera-ready version due (Long, Short, Best of Labs, Overviews)
- 9-12 September 2025: Conference
Aim and Scope
The CLEF Conference addresses all aspects of Information Access in any modality and language. It consists of presentations of research papers and a series of workshops presenting the results of lab-based comparative evaluation benchmarks.
CLEF 2025 is the 16th CLEF conference, continuing the popular CLEF campaigns that have run since 2000 and contributed to the systematic evaluation of information access systems, primarily through experimentation on shared tasks.
The CLEF conference has a clear focus on experimental Information Retrieval (IR) as carried out within evaluation forums (e.g., CLEF Labs, TREC, NTCIR, FIRE, MediaEval, RomIP, SemEval, and TAC), with special attention to the challenges of multimodality, multilinguality, and interactive search in different domains, also considering specific classes of users, such as children, students, and impaired users, across different tasks (e.g., academic, professional, or everyday-life).
We invite paper submissions on significant new insights demonstrated on IR test collections, on analyses of IR test collections and evaluation measures, as well as on concrete proposals to push the boundaries of the Cranfield-style evaluation paradigm.
All submissions to the CLEF main conference will be reviewed on the basis of relevance, originality, importance, and clarity. CLEF welcomes papers that describe rigorous hypothesis testing regardless of whether the results are positive or negative. CLEF also welcomes analyses of past runs, results, and data, as well as new data collections. Methods are expected to be described in enough detail to be reproducible by others, and the logic of the research design should be clearly laid out in the paper. The conference proceedings will be published in the Springer Lecture Notes in Computer Science (LNCS) series.
Topics
Relevant topics for the CLEF 2025 Conference include but are not limited to:
- Information access in any language or modality: information retrieval, image retrieval, question answering, search interfaces and design, infrastructures, etc.
- Analytics for information retrieval: theoretical and practical results in the analytics field specifically targeted at information access (data analysis, data enrichment, etc.).
- Reproducibility and replicability issues: in-depth analyses of past results and runs, both statistical and fine-grained.
- Language diversity: work on low-resource languages.
- Models leveraging collaborative and social data and their evaluation.
- User studies, based either on lab experiments or on crowdsourcing.
- Evaluation initiatives: conclusions, lessons learned, impact, and projection of any evaluation initiative after completing its cycle.
- Evaluation: methodologies, metrics, statistical and analytical tools, component-based evaluation, user groups and use cases, ground-truth creation, impact of multilingual/multicultural/multimodal differences, etc.
- Technology transfer: economic impact/sustainability of information access approaches, deployment and exploitation of systems, use cases, etc.
- Interactive and Conversational Information Retrieval evaluation: the interactive/conversational evaluation of information retrieval systems using user-centered methods, evaluation of novel search interfaces, novel interactive/conversational evaluation methods, simulation of interaction/conversation, etc.
- Specific application domains: information access and its evaluation in application domains such as cultural heritage, digital libraries, social media, health information, legal documents, patents, news, books, and in the form of text, audio and/or image data.
- New data collection: presentation of new data collections with potential high impact on future research, specific collections from companies or labs, multilingual collections.
Format
Authors are invited to electronically submit original papers, which have not been published and are not under consideration elsewhere, using the LNCS proceedings format.
Two categories of papers will be accepted:
- Long papers: 12 pages plus references
- Short papers: 6 pages plus references
Papers should be anonymous; sharing code and data with reviewers should be done via anonymous repositories such as https://anonymous.4open.science/.
Review Process
Papers will be peer-reviewed by three members of the programme committee in two stages. At the first stage, the members will review the originality, clarity, technical and theoretical soundness, and methodology of the paper. At the second stage, the complete manuscripts that passed the first stage will be reviewed. At this stage, reviewers will also assess the reproducibility of the work.
Authors of long and short papers are asked to submit the following TWO versions of their manuscript: the first restricted to the methodology content only, and the second being the complete version (12/6 pages plus references):
- Methodology version (restricted): This version does NOT report anything related to the results of the study. At this stage, manuscripts will be evaluated based on the importance of the problem addressed and the soundness of the methodology. Manuscripts can include an introduction, a description of the proposed methodology, and the datasets used. However, there should be no results or discussion sections, and the authors should also remove mentions of results from the included sections (e.g., abstract, introduction).
- Experimental version (complete): The complete manuscript that contains all the sections of the paper including the experiments and results.
The deadline for the submission of both versions is 20 May 2025.
Paper Submission
Papers should be submitted in PDF format through EasyChair:
- Submit the methodology/restricted version at the "Conference -> Methodology Part" track.
- Submit the experimental/complete version at the "Conference -> Experimental Part" track.
- Submit the best of CLEF 2024 Labs version at the "Conference -> Best of CLEF 2024 Labs" track.
Organisation
General Chairs
- Laura Plaza, Universidad Nacional de Educación a Distancia (UNED), Spain
- Jorge Carrillo-de-Albornoz, Universidad Nacional de Educación a Distancia (UNED), Spain
- Julio Gonzalo, Universidad Nacional de Educación a Distancia (UNED), Spain
- Alba García Seco de Herrera, Universidad Nacional de Educación a Distancia (UNED), Spain
Program Chairs
- Josiane Mothe, Université de Toulouse, France
- Florina Piroi, Technische Universität Wien, Austria
Lab Chairs
- Paolo Rosso, Universitat Politècnica de València, Spain
- Damiano Spina, RMIT University, Australia