PhD student: Development of Trustworthy AI Systems for Critical Infrastructure 100 % (m/f/d)
ZHAW Zürcher Hochschule für Angewandte Wissenschaften
Do you enjoy independent scientific research, have a strong background in machine learning and are interested in development
of Trustworthy AI? We offer a PhD position on Explainability and Fairness of AI as part of an EU-funded project.
We offer a research assistant position at the ZHAW Centre for Artificial Intelligence, fully funded for four years, as part of an international
and interdisciplinary research project on AI in critical infrastructure. (Contracts are renewed on a yearly basis.)
The position is intended to lead to a PhD degree, awarded in collaboration with Ca' Foscari University of Venice, Italy (occasional
stays in Venice are required).
You make major contributions to scientifically challenging, applied research and innovation projects within the Computer Vision,
Perception & Cognition research group at the Centre for Artificial Intelligence (CAI), specifically the EU project
“AI4REALNET”, which focuses on advancing the state of the art in methodologies for addressing ethical issues in AI applications.
You develop high-quality, innovative and complex algorithms and software in the area of deep learning and reinforcement learning,
in particular for human-assisted and autonomous decision-making in critical infrastructure (e.g., railway networks, electrical
grids, air traffic control).
You contribute to the development of conceptual frameworks for operationalizing ethical foundations for the trustworthiness
and explainability of AI systems.
You design and evaluate practical tools for assessing and validating the trustworthiness of AI systems, with a specific focus on their robustness,
explainability and fairness. You investigate means of integrating these tools into existing and upcoming regulatory frameworks.
You publish and present results in the form of journal publications and conference contributions.
You occasionally support lecturers in teaching (e.g., labs or exercises).
You possess a recent, very good Master's degree in computer science, data science or a related technical/scientific field, with
a focus on machine learning and artificial intelligence; ideally, you have also conducted studies or obtained a second degree in
the humanities (technoethics, philosophy of technology, social studies, theology, etc.).
Previous experience in reinforcement learning and decision theory is a plus
Excellent graduates from universities of applied sciences are particularly welcome
You have a strong interest in, and ideally experience with, addressing the ethical and social impacts of AI systems, trustworthy
AI and explainable AI.
You have strong software development skills (ideally in Python), as well as solid experience with common deep learning
frameworks and tools (e.g., PyTorch).
You enjoy independent scientific work as well as collaborating with interdisciplinary and international project teams; high intrinsic
motivation, creativity in finding solutions to complex problems, an exact and reliable working style, good
communication skills, and being adaptable and results-oriented are requirements for this position.
You have very good English language skills, with certified proficiency at B2 level or higher.
In return, we offer the opportunity to combine work on scientifically rewarding and practically relevant research projects at the
young CAI with a PhD dissertation.
Do you have any other questions?
For more information regarding the vacancy, please contact Thilo Stadelmann, Head of the Centre for Artificial Intelligence (CAI).
Phone +41 58 934 72 08, e-mail: firstname.lastname@example.org.
Are you interested?
If you would like to apply, please use the online platform to send us your portfolio, Attn. Tanja Bucher, Branding Specialist &
Recruiting Manager at email@example.com.
For further information on ZHAW or our Institute, go to: www.zhaw.ch/jobs