In recent years, Artificial Intelligence (AI)-driven decision systems have been widely applied in domains such as credit scoring, insurance risk assessment, and health monitoring, where predictive accuracy is critical. While these systems enhance decision-making capabilities, they also introduce ethical and legal challenges, including bias reinforcement, reduced transparency, privacy concerns, and diminished accountability. The complexity and opacity of modern AI models further exacerbate these risks, making it difficult to ensure fairness and compliance with ethical and legal standards.
Most AI systems today rely on Machine Learning algorithms, and the need for ethics and trust in AI has been emphasized through various regulations and guidelines. Frameworks such as the EU’s GDPR mandate the right to "meaningful explanations" of automated decision-making processes, while the AI Act advocates for explainability, transparency, and accountability in AI-driven decision-making.
Despite these efforts, the challenge of developing AI systems that are truly explainable, trustworthy, and compliant with regulatory frameworks remains unresolved. Many methodologies address aspects such as explainability, fairness, and accountability, but a comprehensive and effective framework is still lacking. Addressing these challenges requires collaboration across disciplines, including computer science, law, sociology, and ethics.
XKDD and Beyond is dedicated to advancing research on explainable, transparent, ethical, and fair AI-driven decision systems. This year, the workshop expands its scope to include the emerging topic of unlearning and its intersection with explainable AI (XAI). Unlearning, or the removal of specific knowledge from AI models, is a crucial challenge, especially in contexts requiring compliance with the "right to be forgotten" under GDPR. However, achieving effective unlearning is complex, as model parameters encode learned information in intricate ways.
XAI techniques play a vital role in operationalizing unlearning by providing insights into how decisions are made, identifying the influence of specific data points, and guiding targeted interventions to remove unwanted knowledge while preserving model performance. By integrating explainability with unlearning, AI systems can become more adaptable, accountable, and aligned with ethical and legal standards.
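To make this connection concrete, the following minimal sketch (purely illustrative, not tied to any workshop contribution) shows one common approximate-unlearning baseline in PyTorch: gradient ascent on a designated forget set, combined with ordinary training on a retain set to preserve performance. The model, data, and hyperparameters are placeholders.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier and synthetic data; stand-ins for a real model and dataset.
model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
X_retain, y_retain = torch.randn(64, 10), torch.randint(0, 2, (64,))
X_forget, y_forget = torch.randn(8, 10), torch.randint(0, 2, (8,))

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for _ in range(20):
    optimizer.zero_grad()
    # Ascend on the forget-set loss to erase its influence on the parameters...
    forget_loss = -loss_fn(model(X_forget), y_forget)
    # ...while descending on the retain-set loss to preserve overall accuracy.
    retain_loss = loss_fn(model(X_retain), y_retain)
    (forget_loss + retain_loss).backward()
    optimizer.step()

In practice, XAI tools such as influence estimates or attribution scores can help select which examples or parameters such an unlearning step should target.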
XKDD 2025 invites researchers and practitioners from academia and industry to explore the latest advancements in explainability, trust, and unlearning in AI. Join us in shaping the future of ethical and transparent AI-driven decision-making.
Topics of interest include, but are not limited to:
The call for papers can be downloaded here.
Electronic submissions will be handled via CMT.
Papers must be written in English and formatted according to the Springer Lecture Notes in Computer Science (LNCS) guidelines, following the style of the main conference (format).
The maximum length of both research and position papers is 16 pages, references included. Overlength papers will be rejected without review (papers with page margins or font sizes smaller than those specified in the author instructions and set in the style files will also be treated as overlength).
We also accept 2-4 page abstracts (including references) that outline new emerging ideas and/or already published work for presentation only, to stimulate discussion and collaboration among participants.
Authors who submit their work to XKDD 2025 commit to presenting their paper at the workshop in case of acceptance. XKDD 2025 considers the author list submitted with the paper as final: no additions or deletions may be made after paper submission, either during the review period or, in case of acceptance, at the final camera-ready stage.
A condition for inclusion in the post-proceedings is that at least one co-author presents the paper at the workshop. Pre-proceedings will be available online before the workshop.
All accepted full papers will be published in the post-proceedings as part of the Springer Lecture Notes in Computer Science (LNCS) series.
All deadlines expire at 23:59 AoE (Anywhere on Earth).
Forgetting, Unlearning and Explainability in Large Language Models
Prof. Josep Domingo-Ferrer
What Should LLMs Forget? Quantifying Personal Data in LLMs for Right-to-Be-Forgotten Requests
Dimitri Staufer
The Right to be Forgotten in the Age of AI: Legal, Philosophical, and Technical Challenges
Francesca Naretto; Anna Monreale; Josep Domingo-Ferrer; Alessandro Bucci
Generative Example-Based Explanations: Bridging the Gap between Generative Modeling and Explainability
Philipp Vaeth; Alexander Fruehwald; Benjamin Paassen; Magda Gregorova
TT-XAI: Trustworthy Clinical Text Explanations via Keyword Distillation and LLM Reasoning
Kristian Miok; Blaz Skrlj; Daniela Zaharie; Marko Robnik-Sikonja
Concept-AIME: A Dual Inverse-Model Framework for Concept-Level Global and Local Explanations of Black-Box Predictors
Takafumi Nakanishi
From Explainable AI to Model Diagnosis: A Framework and Comparative Study of Human and ML-Based Explanation Diagnosis
Tobias Gentner; Felix Gerschner; Andreas Theissler
A Concept-based approach to Voice Disorder Detection
Davide Ghia; Gabriele Ciravegna; Alkis Koudounas; Marco Fantini; Erika Crosetti; Giovanni Succo; Tania Cerquitelli
Rotation- and Scale-Invariant Shape Extraction from Vessel Trajectories for Human-In-The-Loop Monitoring: a case study at the Ports of Brittany
Cristiano Landi; Natalia Andrienko; Gennady Andrienko
KL-Guided Concept-Based Learning for Explainable Classification
Rim El Cheikh; Issam Falih; Engelbert Mephu Nguifo
Rule vs. SHAP: Complementary Tools for Understanding and Verifying ML Models
Bahavathy Kathirgamanathan; Gennady Andrienko; Natalia Andrienko
The event will take place at the ECML-PKDD 2025 Conference, Room TBD.
Additional information about the location can be found on the main conference web page: ECML-PKDD 2025.
This workshop is partially supported by the European Community H2020 research and innovation programme under grant agreement 834756, XAI: Science and Technology for the Explanation of AI Decision Making.
This workshop is partially supported by TANGO, a €7M EU-funded Horizon Europe project that aims to develop the theoretical foundations and the computational framework for synergistic human-machine decision making. The 4-year project will pave the way for the next generation of human-centric AI systems.
This workshop is partially supported by the European Community NextGenerationEU programme under the PNRR-PE-AI FAIR (Future Artificial Intelligence Research) funding scheme.
This workshop has been partially supported by the Italian project Fondo Italiano per la Scienza FIS00001966 "MIMOSA".
The XKDD event was organised as part of the SoBigData.it project (Prot. IR0000013 – Call n. 3264 of 28/12/2021) initiatives aimed at training new users and communities in the use of the research infrastructure (SoBigData.eu). "SoBigData.it receives funding from the European Union – NextGenerationEU – National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) – Project: 'SoBigData.it – Strengthening the Italian RI for Social Mining and Big Data Analytics' – Prot. IR0000013 – Avviso n. 3264 del 28/12/2021."
All inquiries should be sent to:
francesca.naretto@unipi.it
francesco.spinnato@di.unipi.it