The Trustworthy and Explainable AI Lab is an interdisciplinary research initiative dedicated to advancing the development of AI systems that are both ethically sound and legally accountable. Focusing on the integration of AI with legal frameworks, the Lab explores methods for ensuring transparency, interpretability, and fairness in AI decision-making processes. Its research addresses key concerns related to the trustworthiness of AI technologies, including the mitigation of bias, the legal implications of algorithmic decisions, and the creation of explainable models that facilitate compliance with regulatory standards. The Lab’s work aims to bridge the gap between AI innovation and legal governance, fostering the responsible deployment of AI systems in society.
The Trustworthy and Explainable AI Lab is affiliated with the Z-Inspection® initiative.
Z-Inspection® is a holistic process for evaluating the trustworthiness of AI-based technologies at different stages of the AI lifecycle. In particular, it focuses on identifying and discussing ethical issues and tensions through the development of socio-technical scenarios.
The process has been published in the IEEE Transactions on Technology and Society.
Z-Inspection® is distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.
Z-Inspection® is listed in the OECD Catalogue of AI Tools & Metrics.