Tuning the Mind of AI.
Trustworthy, Transparent,
Safe and Aligned AI.

Explainable AI
Safety
AI Model Alignment

Department: Artificial Intelligence
Available slots: 1 of 2 startups

The rise of large-scale AI models has unlocked immense potential, but it has also raised serious concerns around trust, safety, and control. As these models are deployed in high-stakes environments, the lack of transparency in how they make decisions poses practical and regulatory risks, especially as new laws like the EU AI Act raise the bar. Most tools focus only on outputs, offering limited insight into the internal reasoning behind model behaviour.

Startups building AI systems that need to be explainable, secure, or regulation-ready require visibility into how their models behave and tools to adjust that behaviour when needed. Without this, they risk unreliable performance, failed audits, and loss of user trust.

Our research at Fraunhofer HHI offers new ways to understand and control model behaviour. It reveals how models make decisions, identifies patterns like shortcut learning and bias, and enables targeted interventions at the mechanism level. These methods apply across vision, language, and time-series models and are protected by 7 patents.
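
For context: the group's best-known line of work is Layer-wise Relevance Propagation (LRP), which traces a model's prediction back to the input features responsible for it. The sketch below shows the general idea using the open-source Captum library; the toy model, shapes, and data are illustrative assumptions, and the patented methods available through the programme go beyond this public baseline.

import torch
import torch.nn as nn
from captum.attr import LRP

# Toy classifier standing in for a startup's production model
# (purely illustrative; not the programme's patented tooling).
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 3),
).eval()

x = torch.randn(1, 16)                # one input sample
target = int(model(x).argmax(dim=1))  # the class whose decision we explain

# Per-feature relevance scores: which inputs drove this prediction?
relevance = LRP(model).attribute(x, target=target)
print(relevance.shape)  # torch.Size([1, 16])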

This is a rare opportunity to build on IP from world-leading researchers in explainable AI and collaborate directly with the team that pioneered it. Startups can gain technical and strategic leverage, accelerate their development, increase trust and compliance readiness, and create hard-to-copy products.

Your unfair advantage

  • Access to leading XAI researchers, including Sebastian Lapuschkin and Wojciech Samek
  • Use of 7 patented methods for attribution, model correction, and concept-level analysis
  • Compute resources via Fraunhofer HHI’s dedicated cluster
  • Technical guidance from our applied AI teams
  • Connections to the Fraunhofer network of technical experts, partners, and applied research initiatives

This is for:

Founders building applications where decisions must be explainable, auditable, and robust in sectors like healthcare, finance, and industrial automation.

Startup teams with strong technical backgrounds, a clear use case, and access to relevant training data are encouraged to apply.

This is NOT for:

Products where explainability is a nice-to-have feature rather than core to the technical innovation, and use cases built on datasets without a clear ground truth (subjective results).
Apply now