TRUST-AD

June 22nd 2025, Cluj-Napoca, Romania


Home

Welcome to the TRUST-AD IV Workshop 2025!

Modern AI techniques have propelled significant advancements in autonomous driving (AD), enabling systems to navigate complex environments with remarkable performance. However, these achievements are overshadowed by a critical limitation: the reliance on black-box models, which are opaque in their decision-making processes and difficult to validate. This lack of transparency hampers deployment, regulatory validation, and public trust, particularly in safety-critical applications. To address these challenges, there is an urgent need to integrate explainable AI, robust validation strategies, and generative data techniques into the development of AD systems.
This workshop brings together cutting-edge approaches to transform black-box models into interpretable, trustworthy systems and seeks to answer the following questions:
  • How can safety and trustworthiness in AD be achieved using synthetic data and simulation?
  • How can (generative) AI enhance explainability and interpretability?
  • What is required to validate the generated data and embodied systems?

This workshop is partially based on the SAFE-DRIVE workshop.

Contact: trust.ad.workshop@gmail.com

Call For Papers

The following three topic clusters are the focus of TRUST-AD:
 •  Synthetic Data and Simulation: modeling and generation of data-driven traffic simulations, realistic object behavior, edge-case scenarios, and naturalistic driving environments; bridging the simulation-reality gap;
 •  Explainable AI for Trustworthy and Transparent Systems: foundation models for safe autonomy; reasoning and guidance with natural language; object-centric world-model representations; human-AV interaction in mixed traffic; behavior prediction and planning and their relevance for AV safety; transferable and continuous driver behavior modeling; explainable end-to-end models;
 •  Validation Frameworks and Regulatory Alignment: quality of generated data; simulation frameworks for mixed-traffic validation; adversarial testing and validation frameworks; alignment with regulatory requirements and public trust.

Keywords

Autonomous Driving, End-to-End Models, Generative AI, Explainable AI, AD Safety, Synthetic Data, Simulation, Trustworthy Systems, Validation Frameworks, Regulatory Alignment

Schedule

The workshop will take place on June 22nd, 2025. Preliminary schedule:

Time   Speaker / Event & Title
08:30  Welcome
08:50  Synthetic Data and Simulation
08:50  Ezio Malis (Inria Rennes)  Hybrid AI: Integration of Rule-Driven and Data-Driven Approaches for Safer Autonomous Driving
09:25  Gabriel Campos (Zenseact)  Scaling Multimodal Perception and Simulation for AD
10:00  Wrap-up
10:10  Spotlights
10:10  Ahmed Abouelazm et al.  Automatic Curriculum Learning for Driving Scenarios: Towards Robust and Efficient Reinforcement Learning
10:30  Coffee break
11:00  Explainable AI for Trustworthy and Transparent Systems
11:00  Amir Rasouli (Noah's Ark Lab)  Learning the right things for robust and interpretable driving
11:35  Ehsan Malayjerdi (Volvo Autonomous Solutions)  Accelerating AV V&V: A SIL-Based Automated Platform
12:10  Wrap-up
12:20  Lunch
13:30  Validation Frameworks and Regulatory Alignment
13:30  Laura Ruotsalainen (University of Helsinki)  Spatiotemporal Machine Learning for Safe, Equitable, and Environmentally Sustainable Autonomous Driving
14:05  Abhinav Valada (Robot Learning Lab)  Rethinking the Foundations of Autonomous Mobility
14:40  Wrap-up
14:50  Spotlights
14:50  Frederik Werner et al.  A Quasi-Steady-State Black Box Simulation Approach for the Generation of G-G-G-V Diagrams
15:10  Martin Aleksandrov  Safer and Trustworthier Navigation of Automated Vehicles
15:30  Coffee break
16:00  Safe Deployment of AD
16:00  Per Nordqvist (Outrider)  Simulation for AD - Escape from reality
16:35  Reza Azad (Loxo)  End-to-End Learning for Autonomous Driving: Opportunities and Real-World Challenges
17:10  Wrap-up
17:20  Panel discussion
18:10  Closing ceremony

Submissions

Workshop Code for submission: TRUST-AD
Workshop Paper Submission Deadline: February 1, 2025, at 23:59 Anywhere on Earth
Workshop Paper Notification of Acceptance: March 30, 2025
Workshop Final Paper Submission Deadline: April 25, 2025
Paper submission here.

FAQs

Q: Will there be archival proceedings?
A: Yes. All submitted manuscripts will be peer-reviewed and published in the IEEE proceedings.

Q: Should submitted papers be anonymized, i.e., submitted without author names?
A: No.

Q: My paper contains ABC, but not XYZ, is this good enough for a submission?
A: Submissions will be peer-reviewed based on the information provided here.


Q: My question is not listed here. How can I contact you?
A: Please reach out to us at the email address trust.ad.workshop@gmail.com and we'll be happy to answer any questions you have!


Q: Will the deadline for paper submission be extended?
A: We are afraid all deadlines are firm and will not be extended.

Q: What is the unique code for this workshop for submission?
A: The unique code for submitting a paper to this workshop is "TRUST-AD". More information here.


Keynote Talks



Hybrid AI: Integration of Rule-Driven and Data-Driven Approaches for Safer Autonomous Driving

Ezio Malis (Inria Rennes)

Abstract TBA



Scaling Multimodal Perception and Simulation for AD

Gabriel Campos (Zenseact)

Autonomous vehicles rely on multimodal perception to tackle the complexity and variability of real-world environments. Multimodality is essential for reaching state-of-the-art performance in automotive perception. In this talk, I will explore the challenges of scaling multimodal simulation, focusing on how synthetic data and advanced scene representations, such as NeRF- and Gaussian Splatting-based approaches, can accelerate the development of robust simulation solutions. I will also discuss strategies to bridge the real-to-sim gap, enabling the deployment of neural-based closed-loop simulation and virtual testing in automotive pipelines.



Learning the right things for robust and interpretable driving

Amir Rasouli (Noah's Ark Lab)

Interpretability is crucial for evaluating, understanding, and trusting the performance of intelligent driving systems. What these systems learn and how they learn play an important role in making sense of their behavior. On one hand, the system can be explained by directly linking its performance to various elements of the task. On the other hand, the users' knowledge of the learning process can shape their expectations and, consequently, enhance their ability to interpret the system's behavior. This talk will review works on context understanding and representation learning and how both can contribute to the explainability of intelligent driving systems.



Rethinking the Foundations of Autonomous Mobility

Abhinav Valada (Robot Learning Lab)

In this talk, I will present our efforts toward learning open-world robot autonomy, where reliability and robustness play a crucial role. I will discuss our recent advancements in leveraging foundation models and continual online learning for commonsense reasoning through language and vision.



Spatiotemporal Machine Learning for Safe, Equitable, and Environmentally Sustainable Autonomous Driving

Laura Ruotsalainen (University of Helsinki)

We present machine learning approaches that advance the safety, equity, and sustainability of autonomous driving in complex real-world settings. Central to our work is a multi-objective hierarchical reinforcement learning framework (MOHRL-ci) designed for urban-scale optimization of traffic flow, air quality, and livability. We also apply multi-objective reinforcement learning to infrastructure-level planning tasks, such as electric vehicle charging station placement and autonomous car sharing, both aimed at reducing CO₂ emissions and promoting low-carbon mobility systems. Complementing this, we develop uncertainty-aware transformer models to mitigate GNSS interference, enhancing navigation reliability under adverse conditions, a safety-critical component for autonomous vehicles. Together, these contributions provide a cohesive foundation for autonomous driving systems that are not only technically capable, but also socially and environmentally aligned.



Accelerating AV V&V: A SIL-Based Automated Platform

Ehsan Malayjerdi (Volvo Autonomous Solutions)

  • SIL-Based Automation: streamlining AV V&V through automated Software-in-the-Loop testing
  • Scalable and Cost-Efficient: enabling efficient testing of complex AV systems at reduced costs
  • Safety and Compliance: ensuring AV safety and regulatory compliance through rigorous testing
  • Data-Driven Insights: leveraging test data for continuous improvement and informed decision-making



Simulation for AD - Escape from reality

Per Nordqvist (Outrider)

Abstract TBA



End-to-End Learning for Autonomous Driving: Opportunities and Real-World Challenges

Reza Azad (Loxo)

End-to-end learning is redefining the landscape of autonomous driving by enabling systems to learn complex driving behaviors directly from sensor data using deep neural networks. This approach removes the need for traditional modular pipelines—such as perception, prediction, and planning—offering a more unified and potentially more scalable solution.

Yet, transitioning from research prototypes to real-world autonomous systems reveals critical challenges, particularly in data. High-quality, diverse, and representative data is essential to ensure robust model performance. However, data collection at scale is expensive and often biased toward common scenarios, leaving rare and safety-critical edge cases underrepresented. Such biases can lead to network overfitting, poor generalization, and unpredictable behavior in novel environments.

This keynote addresses the core data-centric challenges in end-to-end autonomous driving and explores emerging strategies to overcome them. Topics include advanced data augmentation techniques to simulate rare scenarios, sensor shift strategies to enhance domain generalization, and the integration of world models that incorporate explicit safety constraints into learning and decision-making. These methods aim to improve robustness, interpretability, and trustworthiness of end-to-end systems.

Drawing from both academic research and operational deployments, the talk outlines a path forward for developing safe, scalable, and data-efficient end-to-end autonomous driving technologies.

Accepted Papers



Automatic Curriculum Learning for Driving Scenarios: Towards Robust and Efficient Reinforcement Learning

Ahmed Abouelazm, Tim Weinstein, Tim Joseph, Philip Schörner, Marius Zöllner

This paper addresses the challenges of training end-to-end autonomous driving agents using Reinforcement Learning (RL). RL agents are typically trained on a fixed set of scenarios and nominal behavior of surrounding road users in simulation, limiting their generalization and real-life deployment. While Domain Randomization offers a potential solution by randomly sampling driving scenarios, it frequently results in inefficient training and sub-optimal policies due to the high variance among training scenarios. To address these limitations, we propose an automatic curriculum learning framework that dynamically generates driving scenarios with adaptive complexity based on the agent's evolving capabilities. Unlike manually designed curricula that introduce expert bias and lack scalability, our framework incorporates a "teacher" that automatically generates and mutates driving scenarios based on their learning potential, an agent-centric metric derived from the agent's current policy, eliminating the need for expert design. The framework enhances training efficiency by excluding scenarios the agent has mastered or finds too challenging. We evaluate our framework in a reinforcement learning setting where the agent learns a driving policy from camera images. Comparative results against baseline methods, including fixed scenario training and domain randomization, demonstrate that our approach leads to enhanced generalization, achieving higher success rates (+9% in low traffic density, +21% in high traffic density) and faster convergence with fewer training steps. Our findings highlight the potential of automatic curriculum learning (ACL) in improving the robustness and efficiency of RL-based autonomous driving agents.
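The teacher loop described in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the `Scenario` parameters, the mutation scheme, and the use of `p * (1 - p)` as the learning-potential metric are all assumptions for demonstration.

```python
import random

class Scenario:
    """Hypothetical driving scenario described by two tunable parameters."""
    def __init__(self, traffic_density, weather_severity):
        self.traffic_density = traffic_density      # e.g. vehicles per 100 m
        self.weather_severity = weather_severity    # 0 = clear, 1 = extreme

    def mutate(self):
        # Small random perturbation of the scenario parameters.
        return Scenario(
            max(0.0, self.traffic_density + random.uniform(-0.1, 0.1)),
            min(1.0, max(0.0, self.weather_severity + random.uniform(-0.1, 0.1))),
        )

def learning_potential(success_rate):
    # Agent-centric metric: scenarios the agent sometimes solves and
    # sometimes fails (success rate near 0.5) are most informative;
    # mastered (~1) or hopeless (~0) scenarios score near zero.
    return success_rate * (1.0 - success_rate)

def teacher_step(population, evaluate, keep=4):
    # Rank scenarios by learning potential under the current policy,
    # keep the most informative ones, and refill the pool by mutation.
    scored = sorted(population,
                    key=lambda s: learning_potential(evaluate(s)),
                    reverse=True)
    survivors = scored[:keep]
    children = [random.choice(survivors).mutate()
                for _ in range(len(population) - keep)]
    return survivors + children
```

Calling `teacher_step` once per training round with `evaluate` backed by rollouts of the current policy keeps the curriculum centered on scenarios of intermediate difficulty, which is the effect the abstract attributes to the teacher.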



A Quasi-Steady-State Black Box Simulation Approach for the Generation of G-G-G-V Diagrams

Frederik Werner, Simon Sagmeister, Mattia Piccinini, Johannes Betz

The classical g-g diagram, representing the achievable acceleration space for a vehicle, is commonly used as a constraint in trajectory planning and control due to its computational simplicity. To address non-planar road geometries, this concept can be extended to incorporate g-g constraints as a function of vehicle speed and vertical acceleration, commonly referred to as g-g-g-v diagrams. However, the estimation of g-g-g-v diagrams is an open problem. Existing simulation-based approaches struggle to isolate non-transient, open-loop stable states across all combinations of speed and acceleration, while optimization-based methods often require simplified vehicle equations and have potential convergence issues. In this paper, we present a novel, open-source, quasi-steady-state black box simulation approach that applies a virtual inertial force in the longitudinal direction. The method emulates the load conditions associated with a specified longitudinal acceleration while maintaining constant vehicle speed, enabling open-loop steering ramps in a purely QSS manner. Appropriate regulation of the ramp steer rate inherently mitigates transient vehicle dynamics when determining the maximum feasible lateral acceleration. Moreover, treating the vehicle model as a black box eliminates model mismatch issues, allowing the use of high-fidelity or proprietary vehicle dynamics models typically unsuited for optimization approaches. An open-source version of the proposed method is available at: https://github.com/TUM-AVS/GGGVDiagrams
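The quasi-steady-state idea in this abstract — treat the vehicle model as a black box, emulate a longitudinal load, and ramp steering slowly until lateral acceleration saturates — can be illustrated with a toy stand-in model. The friction-ellipse `black_box_ay` below is my own placeholder, not the paper's simulator; only the ramp-and-saturate logic mirrors the described approach.

```python
import math

MU, G = 1.0, 9.81  # assumed friction coefficient and gravity [m/s^2]

def black_box_ay(v, ax, steer):
    # Stand-in vehicle model: kinematic lateral acceleration v^2 * steer / L,
    # capped by a friction ellipse shared with longitudinal acceleration ax.
    wheelbase = 2.7
    ay_kinematic = v * v * steer / wheelbase
    ay_limit = math.sqrt(max(0.0, (MU * G) ** 2 - ax ** 2))
    return min(ay_kinematic, ay_limit)

def max_lateral_accel(v, ax, steer_rate=0.001, steps=2000):
    # Open-loop steering ramp: increase steer slowly (quasi-steady-state)
    # at constant speed and emulated longitudinal load, and stop once
    # lateral acceleration no longer grows (saturation reached).
    steer, ay_prev = 0.0, 0.0
    for _ in range(steps):
        steer += steer_rate
        ay = black_box_ay(v, ax, steer)
        if ay <= ay_prev:
            return ay_prev
        ay_prev = ay
    return ay_prev
```

Sweeping `max_lateral_accel` over a grid of speeds and longitudinal accelerations yields one slice of a g-g-g-v diagram; with a real black-box simulator in place of `black_box_ay`, the slow ramp rate is what keeps the sweep free of transient dynamics.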



Safer and Trustworthier Navigation of Automated Vehicles

Martin Aleksandrov

We propose a novel approach for the safer and trustworthier navigation of automated vehicles, which uses risk estimations for predicting vehicle trajectories and generates navigation instructions for these trajectories. To increase the reliability of instructions, we first use the latest YOLO11 model and adapt images from the KITTI dataset to include rain, fog, and evening effects, such as low lighting and darkness, and then train the model on the modified images to make it more robust to scene changes under such weather and evening conditions. For this more robust model, we give functions that assign risk estimations to images taken from cameras associated with nodes in road networks. Based on these risk estimations, we give algorithms for the dynamic navigation of automated vehicles along network trajectories of low risk. In cases where risk estimations at network nodes and in network paths are too high, we give methods that produce human-interpretable explanations of these risk values and recommend driving instructions. Overall, our research facilitates the road integration and social adoption of automated driving.
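The navigation step described here — route through a road network using per-node risk estimates, and refuse nodes whose risk is too high — can be sketched as risk-accumulating Dijkstra search. The graph, risk values, and threshold below are illustrative assumptions, not the paper's algorithm or data.

```python
import heapq

def lowest_risk_path(graph, node_risk, start, goal, max_node_risk=0.8):
    # Dijkstra over accumulated node risk. Nodes whose individual risk
    # exceeds max_node_risk are skipped entirely; returning None for an
    # unreachable goal is the point where a human-interpretable
    # explanation ("no acceptable-risk route exists") would be produced.
    frontier = [(node_risk[start], start, [start])]
    best = {}
    while frontier:
        risk, node, path = heapq.heappop(frontier)
        if node == goal:
            return risk, path
        if best.get(node, float("inf")) <= risk:
            continue
        best[node] = risk
        for nxt in graph.get(node, []):
            if node_risk[nxt] > max_node_risk:
                continue  # node too risky to traverse
            heapq.heappush(frontier, (risk + node_risk[nxt], nxt, path + [nxt]))
    return None
```

In the paper's setting, `node_risk` would come from the weather-robust detector's risk-estimation functions; here it is just a dictionary of floats.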

Organizers

  • Shoaib Azam

    Shoaib Azam

    is a Postdoctoral Scholar at Aalto University & the Finnish Center for Artificial Intelligence, Finland.

  • Beatriz Cabrero-Daniel

    Beatriz Cabrero-Daniel

    is a postdoctoral researcher at the University of Gothenburg and Chalmers University of Technology, focusing on virtual toolchains for verification and validation of autonomous driving functions.

  • Tsvetomila Mihaylova

    Tsvetomila Mihaylova

    is a Postdoctoral Scholar at Aalto University, Finland.

  • Stefan Reitmann

    Stefan Reitmann

    is a research fellow at Lund University, Sweden.

  • Yuchen Liu

    Yuchen Liu

    is a PhD student at the Chair of Ergonomics at the Technical University of Munich, focusing on socio-technical interactions between pedestrians and automated vehicles.

  • Markus Enzweiler

    Markus Enzweiler

    is a Professor of Autonomous Mobile Systems and a Director of the Institute for Intelligent Systems at Esslingen University of Applied Sciences, Germany.

  • Katharina Winter

    Katharina Winter

    is a Research Assistant at the Intelligent Vehicles Lab and a PhD candidate at Munich University of Applied Sciences. Her research focuses on trajectory planning with LLMs.

  • Fabian Schmidt

    Fabian Schmidt

    is a Research Assistant at the Institute for Intelligent Systems at Esslingen University and a PhD candidate at the University of Freiburg, focusing on visual localization and LLM-based decision-making for autonomous systems.

  • Fabian Flohr

    Fabian Flohr

    is a full professor of machine learning at the Munich University of Applied Sciences where he is leading the Intelligent Vehicles Lab.


  • Farzeen Munir

    Farzeen Munir

    is a Postdoctoral Scholar at Aalto University & the Finnish Center for Artificial Intelligence, Finland.

  • Julian Kooij

    Julian Kooij

    is an Associate Professor in the Intelligent Vehicles group, part of the Cognitive Robotics department of TU Delft, The Netherlands.

  • Mazen Mohamad

    Mazen Mohamad

    is a researcher at RISE Research Institutes of Sweden, Sweden's research institute and innovation partner.

  • Mohan Ramesh

    Mohan Ramesh

    is a Research Assistant at the Intelligent Vehicles Lab and a PhD candidate at Munich University of Applied Sciences. His research focuses on synthesising human motion and behaviour for autonomous driving applications.

  • Erik Schütz

    Erik Schütz

    is a Research Assistant at the Intelligent Vehicles Lab and a PhD candidate at Munich University of Applied Sciences. His research focuses on trajectory prediction for vulnerable road users.

  • Peter Pinggera

    Peter Pinggera

    is a System Architect at Zenseact, where he is working on bringing safe ADAS and AD systems into production.

Partners

    Some of the workshop organizers are members of German publicly funded projects.