Summary and Scope
Source tracing is the digital forensic process of attributing synthetic or manipulated audio to its generative origin. It seeks to answer a critical question: which specific text-to-speech (TTS) or voice conversion (VC) system created this audio deepfake? By identifying the source, whether it is Vendor A, Vendor B, or an open-source model, providers and authorities can take decisive action, such as closing malicious accounts, tracking coordinated disinformation campaigns, and improving the accountability of generative AI.
However, many current attribution models rely on "shortcuts": spurious correlations with speaker identity, language, or recording conditions rather than genuine system artifacts. Such models may perform near-perfectly in laboratory settings, yet they often fail in real-world scenarios and do not generalize to new conditions or unseen generative models. This special session explicitly addresses these challenges: we aim to foster fair and robust attribution methods that capture genuine system-specific fingerprints and demonstrate real-world generalization.
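A common diagnostic for shortcut reliance is to enforce attribute-disjoint evaluation partitions: if no speaker appears in both training and test data, a classifier cannot score well by memorizing voices instead of system artifacts. The sketch below illustrates this idea with scikit-learn's GroupShuffleSplit; the records, speaker IDs, and system labels (e.g. "tts_vendor_a") are hypothetical placeholders, not part of any session-provided dataset or toolkit.

```python
# Minimal sketch of a speaker-disjoint split for source-tracing evaluation.
# All data below is illustrative; only the splitting pattern matters.
from sklearn.model_selection import GroupShuffleSplit

# Each record: (audio_path, speaker_id, system_label)
utterances = [
    ("a.wav", "spk01", "tts_vendor_a"),
    ("b.wav", "spk01", "vc_open_source"),
    ("c.wav", "spk02", "tts_vendor_a"),
    ("d.wav", "spk03", "vc_open_source"),
]

groups = [spk for _, spk, _ in utterances]  # group by speaker, not by system
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(utterances, groups=groups))

# No speaker appears in both partitions, so memorizing speaker identity
# cannot inflate held-out attribution accuracy.
assert not {groups[i] for i in train_idx} & {groups[i] for i in test_idx}
```

The same pattern extends to language- or channel-disjoint splits by grouping on those attributes instead of speaker identity.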
Submission Information
Deadline: March 15, 2026
Subject Area 6.02 – Model fairness meets source tracing: Toward trustworthy AI for manipulated speech attribution
CMT: https://cmt3.research.microsoft.com/ODYSSEY2026/
Templates: https://odyssey2026.inesc-id.pt/preparation-guidelines-and-templates/
Organizers
Nicolas Müller (Fraunhofer AISEC, Germany / Resemble AI)
Hemlata Tak (Pindrop, USA)
Adriana Stan (Technical Univ. of Cluj-Napoca, Romania)
Jennifer Williams (Univ. of Southampton, UK)
Piotr Kawa (Wrocław Univ. of Science and Technology / Resemble AI)
Xin Wang (National Inst. of Informatics, Japan)
Jagabandhu Mishra (Univ. of Eastern Finland)
Ajinkya Kulkarni (Idiap Research Institute, Switzerland)
