
Department: Artificial Intelligence
Available slots: 1 of 2 startups
The rise of large-scale AI models has unlocked immense potential, but it has also raised serious concerns around trust, safety, and control. As these models are deployed in high-stakes environments, the lack of transparency in how they make decisions poses practical and regulatory risks, especially as new laws such as the EU AI Act raise the bar. Most existing tools focus only on model outputs and offer limited insight into the internal reasoning behind model behaviour.
Startups building AI systems that need to be explainable, secure, or regulation-ready require visibility into how their models behave, along with tools to adjust that behaviour when needed. Without this visibility, they risk unreliable performance, failed audits, and loss of user trust.
Our research at Fraunhofer HHI offers new ways to understand and control model behaviour. It reveals how models make decisions, identifies patterns like shortcut learning and bias, and enables targeted interventions at the mechanism level. These methods apply across vision, language, and time-series models and are protected by 7 patents.
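To make the idea of insight into model decisions concrete, here is a minimal, generic sketch of attribution-based explanation using gradient-times-input on a toy PyTorch classifier. It illustrates the general concept only and is not the patented Fraunhofer HHI methods; the model, input, and feature indices are hypothetical placeholders.

# Generic gradient-times-input attribution on a toy classifier.
# Illustrative sketch only; NOT the patented Fraunhofer HHI methods.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))  # toy classifier
x = torch.randn(1, 16, requires_grad=True)                             # one input sample

logits = model(x)
predicted = logits.argmax(dim=1).item()      # class the model chose

# Gradient of the predicted logit with respect to the input, scaled by the
# input itself: large magnitudes flag features the decision relied on.
logits[0, predicted].backward()
relevance = (x.grad * x).detach().squeeze(0)

print("Predicted class:", predicted)
print("Most influential input features:", relevance.abs().topk(3).indices.tolist())

Attributions of this kind, applied per prediction, are one way to surface patterns such as shortcut learning or bias before they cause failed audits or unreliable behaviour in production.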
This is a rare opportunity to build on IP from world-leading researchers in explainable AI and to collaborate directly with the team that pioneered these methods. Startups can gain technical and strategic leverage, accelerate development, strengthen trust and compliance readiness, and build hard-to-copy products.