Welcome to the TRUST-AD IV Workshop 2025!
Modern AI techniques have propelled significant advancements in autonomous driving (AD), enabling systems to navigate complex environments with remarkable performance. However, these achievements are overshadowed by a critical limitation: the reliance on black-box models, which are opaque in their decision-making processes and challenging to validate. This lack of transparency hampers deployment, regulatory validation, and public trust, particularly in safety-critical applications. To address these challenges, there is an urgent need to integrate explainable AI, robust validation strategies, and generative data techniques into the development of AD systems.
This workshop brings together cutting-edge approaches for transforming black-box models into interpretable, trustworthy systems and seeks to answer the following questions:
• How can safety and trustworthiness in AD be achieved using synthetic data and simulation?
• How can (generative) AI enhance explainability and interpretability?
• What is required to validate the generated data and embodied systems?
This workshop is partially based on the SAFE-DRIVE workshop.
Contact: trust.ad.workshop@gmail.com
Call For Papers
The following three topic clusters are the focus of TRUST-AD:
• Synthetic Data and Simulation:
Modeling and generation of data-driven traffic simulations,
realistic object behavior,
edge case scenarios and naturalistic driving environments;
bridging the simulation-reality gap;
• Explainable AI for Trustworthy and Transparent Systems:
Foundation models for safe autonomy;
reasoning and guidance with natural language;
object-centric world model representation;
human-AV interaction in mixed traffic;
behavior prediction and planning and their relevance for AV safety;
transferable and continuous driver behavior modeling;
explainable End-to-End models;
• Validation Frameworks and Regulatory Alignment:
quality of generated data;
simulation frameworks for mixed-traffic validation;
adversarial testing and validation frameworks;
alignment with regulatory requirements and public trust;
Keywords
Autonomous Driving, End-to-End Models, Generative AI, Explainable AI, AD Safety, Synthetic Data, Simulation, Trustworthy Systems, Validation Frameworks, Regulatory Alignment
Keynote Speakers
TBA
Schedule
The workshop will take place on June 22nd, 2025. Preliminary schedule:
Time | Event | Details
---|---|---
08:30 | Welcome | 10 min |
08:40 | Synthetic Data and Simulation | 2 keynotes |
09:40 | Spotlights | 2 x 10 min |
10:00 | Coffee Break | - |
10:00 | Poster Sessions | 60 min |
11:00 | Explainable AI for Trustworthy and Transparent Systems | 2 keynotes, 60 min |
12:00 | Lunch Break | 60 min |
13:00 | Spotlights | 3 x 10 min |
13:30 | Validation Frameworks and Regulatory Alignment | 2 keynotes, 60 min |
14:30 | Spotlights | 3 x 10 min |
15:00 | Coffee Break | 30 min |
15:30 | Safe Deployment of AD | 1 keynote, 30 min |
16:00 | Panel Discussion | Speakers, 60 min |
17:00 | Closing | 10 min |
Submissions
Workshop Code for submission: TRUST-AD
Workshop Paper Submission Deadline: February 1, 2025, at 23:59 Anywhere on Earth
Workshop Paper Notification of Acceptance: March 30, 2025
Workshop Final Paper Submission Deadline: April 25, 2025
Paper submission here.
FAQs
Q: Will there be archival proceedings?
A: Yes. All submitted manuscripts will be peer-reviewed and published in the IEEE proceedings.
Q: Should submitted papers be anonymized, i.e., submitted without author names?
A: No.
Q: My paper contains ABC but not XYZ; is this good enough for a submission?
A: Submissions will be peer-reviewed based on the information provided here.
Q: My question is not listed here. How can I contact you?
A: Please reach out to us at trust.ad.workshop@gmail.com and we'll be happy to answer any questions you have!
Q: Will the deadline for paper submission be extended?
A: We are afraid all deadlines are firm and will not be extended.
Q: What is the unique code for this workshop for submission?
A: The unique code for submitting a paper to this workshop is "TRUST-AD". More information here.
Keynote Talks
TBA
Organizers
- Shoaib Azam is a Postdoctoral Scholar at Aalto University & Finnish Center for Artificial Intelligence, Finland.
- Beatriz Cabrero-Daniel is a postdoctoral researcher at the University of Gothenburg and Chalmers University of Technology, focusing on virtual toolchains for verification and validation of autonomous driving functions.
- Tsvetomila Mihaylova is a Postdoctoral Scholar at Aalto University, Finland.
- Stefan Reitmann is a research fellow at Lund University, Sweden.
- Yuchen Liu is a PhD student at the Chair of Ergonomics at the Technical University of Munich, focusing on socio-technical interactions between pedestrians and automated vehicles.
- Markus Enzweiler is a Professor of Autonomous Mobile Systems and Director of the Institute for Intelligent Systems at Esslingen University of Applied Sciences, Germany.
- Katharina Winter is a Research Assistant at the Intelligent Vehicles Lab and a PhD candidate at Munich University of Applied Sciences. Her research focuses on trajectory planning with LLMs.
- Fabian Schmidt is a Research Assistant at the Institute for Intelligent Systems at Esslingen University of Applied Sciences and a PhD candidate at the University of Freiburg, focusing on visual localization and LLM-based decision-making for autonomous systems.
- Fabian Flohr is a Full Professor of Machine Learning at Munich University of Applied Sciences, where he leads the Intelligent Vehicles Lab.
- Farzeen Munir is a Postdoctoral Scholar at Aalto University & Finnish Center for Artificial Intelligence, Finland.
- Julian Kooij is an Associate Professor in the Intelligent Vehicles group, part of the Cognitive Robotics department of TU Delft, The Netherlands.
- Mazen Mohamad is a researcher at RISE Research Institutes of Sweden, Sweden's research institute and innovation partner.
- Mohan Ramesh is a Research Assistant at the Intelligent Vehicles Lab and a PhD candidate at Munich University of Applied Sciences. His research focuses on synthesising human motion and behaviour for autonomous driving applications.
- Erik Schütz is a Research Assistant at the Intelligent Vehicles Lab and a PhD candidate at Munich University of Applied Sciences. His research focuses on trajectory prediction for vulnerable road users.
- Peter Pinggera is a System Architect at Zenseact, where he is working on bringing safe ADAS and AD systems into production.