Join our research team as a PhD candidate on the DECIDE project, which tackles one of the most urgent challenges in artificial intelligence: ensuring that AI systems are transparent, interpretable, and aligned with human needs and values. You will focus on developing, testing, and reviewing a methodology to make AI systems transparent and explainable. The goal is to empower citizens and professional decision-makers to make better-informed decisions using AI.
Your job
AI is increasingly used in domains where decisions carry profound consequences for human lives. By focusing on transparency and explainability, this PhD project gives you the opportunity to shape how humans and AI interact in high-stakes contexts. You will help define how AI can support, rather than replace, human judgment, ensuring that technology strengthens rather than undermines trust and autonomy.
You will be part of the DECIDE project: a large-scale, NWO-funded research initiative under the Dutch Research Agenda (NWA). It brings together 10 Dutch universities, over 50 academic researchers, and 30 societal partners to co-develop a new generation of transparent, citizen-empowering AI systems. The project spans domains such as healthcare, mobility, education, law, ethics, and public governance.
This position is a collaboration between Utrecht University and the University of Twente. You will also have the opportunity for a secondment at “The Hyve”, a company enabling Open Science. You will be based in Utrecht, working in the AI Technology for Life group. Additionally, you will collaborate closely with researchers at the Utrecht UMC on oncology, radiology and/or psychiatry use cases, involving studies and settings where high-stakes decisions are made.
As a PhD candidate, you will be part of a vibrant inter- and transdisciplinary research community, collaborating across disciplines and with societal stakeholders to achieve real-world impact. You will also participate in joint training programmes on interdisciplinary and transdisciplinary research methods, citizen engagement, and ethical AI.
Responsibilities
- Review existing Explainable and Transparent AI frameworks and methodologies.
- Develop a framework that takes the needs of all stakeholders into account when high-stakes decisions are made. Four scenarios used throughout the consortium will help guide this development.
- Implement and test this framework in a clinical setting.
- Help teach explainable AI to bachelor’s and master’s students and/or decision-makers, for example by developing workshops and training materials.
€30,000 - €36,000 monthly
