7th ECML PKDD International Workshop on

eXplainable Knowledge Discovery in Data Mining and Unlearning

September 19th, 2025

Call for Papers

In recent years, Artificial Intelligence (AI)-driven decision systems have been widely applied in domains such as credit scoring, insurance risk assessment, and health monitoring, where predictive accuracy is critical. While these systems enhance decision-making capabilities, they also introduce ethical and legal challenges, including bias reinforcement, reduced transparency, privacy concerns, and diminished accountability. The complexity and opacity of modern AI models further exacerbate these risks, making it difficult to ensure fairness and compliance with ethical and legal standards.

Most AI systems today rely on Machine Learning algorithms, and the need for ethics and trust in AI has been emphasized through various regulations and guidelines. Frameworks such as the EU’s GDPR mandate the right to "meaningful explanations" of automated decision-making processes, while the AI Act advocates for explainability, transparency, and accountability in AI-driven decision-making.

Despite these efforts, the challenge of developing AI systems that are truly explainable, trustworthy, and compliant with regulatory frameworks remains unresolved. Many methodologies address aspects such as explainability, fairness, and accountability, but a comprehensive and effective framework is still lacking. Addressing these challenges requires collaboration across disciplines, including computer science, law, sociology, and ethics.

XKDD and Beyond is dedicated to advancing research on explainable, transparent, ethical, and fair AI-driven decision systems. This year, the workshop expands its scope to include the emerging topic of unlearning and its intersection with explainable AI (XAI). Unlearning, or the removal of specific knowledge from AI models, is a crucial challenge, especially in contexts requiring compliance with the "right to be forgotten" under GDPR. However, achieving effective unlearning is complex, as model parameters encode learned information in intricate ways.

XAI techniques play a vital role in operationalizing unlearning by providing insights into how decisions are made, identifying the influence of specific data points, and guiding targeted interventions to remove unwanted knowledge while preserving model performance. By integrating explainability with unlearning, AI systems can become more adaptable, accountable, and aligned with ethical and legal standards.

XKDD 2025 invites researchers and practitioners from academia and industry to explore the latest advancements in explainability, trust, and unlearning in AI. Join us in shaping the future of ethical and transparent AI-driven decision-making.

Topics of interest include, but are not limited to:

The call for papers can be downloaded here.

Submission

Electronic submissions will be handled via CMT.

Papers must be written in English and formatted according to the Springer Lecture Notes in Computer Science (LNCS) guidelines following the style of the main conference (format).

The maximum length of either research or position papers is 16 pages, references included. Overlength papers will be rejected without review (papers with smaller page margins or font sizes than specified in the author instructions and set in the style files will also be treated as overlength).

Authors who submit their work to XKDD 2025 commit to presenting their paper at the workshop if it is accepted. XKDD 2025 considers the author list submitted with the paper as final: no additions or deletions may be made after submission, either during the review period or, in case of acceptance, at the camera-ready stage.

A condition for inclusion in the post-proceedings is that at least one of the co-authors has presented the paper at the workshop. Pre-proceedings will be available online before the workshop.

All accepted papers will be published as post-proceedings in the Springer Lecture Notes in Computer Science (LNCS) series.

Important Dates

  • Paper Submission deadline: June 6th, 2025
  • Accept/Reject Notification: July 14th, 2025
  • Camera-ready deadline: TBD
  • Workshop: September 19th, 2025

Organization

Program Chairs

Invited Speakers

Josep Domingo-Ferrer

Professor at Universitat Rovira i Virgili, Catalonia

TBD

TBD

Josep Domingo-Ferrer is a Distinguished Full Professor of Computer Science and an ICREA-Acadèmia Researcher at Universitat Rovira i Virgili, Tarragona, Catalonia, where he is the founding director of CYBERCAT-Center for Cybersecurity Research of Catalonia. He is also an associated researcher at the VP-IP Chair of Institut Mines-Télécom, Paris, France. He received his BSc-MSc and PhD degrees in Computer Science from the Autonomous University of Barcelona in 1988 and 1991, respectively (Outstanding Graduation Award). He also holds a BSc-MSc in Mathematics from U.N.E.D. (Madrid) and an MA in Philosophy from University of Paris Nanterre. His research interests are in data privacy, data security, statistical disclosure control, and, more generally, ethics-by-design in AI and ICT.

Andreas Theissler

Professor at Aalen University of Applied Sciences, Germany

TBD

TBD

Andreas Theissler is a Professor at Aalen University of Applied Sciences in Germany. He studied Software Engineering and Distributed Computer Systems and began research in Data Science during his Master's thesis in 2006. Since then, he has conducted research and later lectured in the field. Before joining Aalen University, he held various industry positions in Data Science, including roles in the automotive business unit of Bosch. His research and teaching focus on different aspects of Data Science, with a particular emphasis on Machine Learning (ML) and the interaction between ML models and humans, such as Explainable AI, Interactive Machine Learning, and Visual Analytics. His research interests include both the fundamentals of ML and its application in real-world scenarios.

Program Committee

TBD

Program (TBD)

Schedule to be determined

Venue

The event will take place at the ECML-PKDD 2025 Conference (venue TBD).


Additional information about the location can be found at the main conference web page: ECML-PKDD 2025

Partners

This workshop is partially supported by the European Community H2020 Program under the research and innovation programme, grant agreement 834756 XAI, "Science and Technology for the Explanation of AI Decision Making".

This workshop is partially supported by the European Community H2020 Program under the funding scheme FET Flagship Project Proposal, grant agreement 952026 HumanE-AI-Net.

This workshop is partially supported by the European Community H2020 Program under the funding scheme INFRAIA-2019-1: Research Infrastructures, grant agreement 871042 SoBigData++.

This workshop is partially supported by TANGO. TANGO is a €7M EU-funded Horizon Europe project that aims to develop the theoretical foundations and the computational framework for synergistic human-machine decision making. The 4-year project will pave the way for the next generation of human-centric AI systems. TANGO.

This workshop is partially supported by the European Community NextGenerationEU programme under the funding scheme PNRR-PE-AI FAIR (Future Artificial Intelligence Research). FAIR.

The XKDD event is organised as part of the SoBigData.it project (Prot. IR0000013 - Call n. 3264 of 28/12/2021) initiatives aimed at training new users and communities in the usage of the research infrastructure (SoBigData.eu). "SoBigData.it receives funding from European Union – NextGenerationEU – National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) – Project: 'SoBigData.it – Strengthening the Italian RI for Social Mining and Big Data Analytics' – Prot. IR0000013 – Avviso n. 3264 del 28/12/2021." SoBigData.it.

Contacts

All inquiries should be sent to:

francesca.naretto@unipi.it

francesco.spinnato@di.unipi.it