
Center for Artificial Intelligence and Cybersecurity – AIRI


Trustworthy and Explainable AI Lab

The Trustworthy and Explainable AI Lab is an interdisciplinary research initiative dedicated to advancing AI systems that are both ethically sound and legally accountable. Focusing on the integration of AI with legal frameworks, the Lab develops methods for ensuring transparency, interpretability, and fairness in AI decision-making. Its research addresses key concerns about the trustworthiness of AI technologies, including the mitigation of bias, the legal implications of algorithmic decisions, and the creation of explainable models that facilitate compliance with regulatory standards. The Lab’s work aims to bridge the gap between AI innovation and legal governance, fostering the responsible deployment of AI systems in society.

The Trustworthy and Explainable AI Lab is affiliated with the Z-Inspection® initiative.

Z-Inspection® is a holistic process for evaluating the trustworthiness of AI-based technologies at different stages of the AI lifecycle. In particular, it focuses on identifying and discussing ethical issues and tensions through the development of socio-technical scenarios.

The process has been published in the IEEE Transactions on Technology and Society.

Z-Inspection® is distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.

Z-Inspection® is listed in the OECD Catalogue of AI Tools & Metrics.

Head of Laboratory

Ivana Kunda, Prof., PhD

Laboratory Projects

Data Governance and Intellectual Property Governance in Common European Data Spaces – DGIP-CEDS

Laboratory Research Papers

Pravna tehnologija (Legal Tech) i njezina (ne)prikladnost za zamjenu pravne struke [Legal Technology (Legal Tech) and Its (Un)Suitability to Replace the Legal Profession]

Artificial Intelligence as a Challenge for European Patent Law

Affiliated Researchers

  • Adna Škamo
  • Ivana Kunda, Prof., PhD
  • Jasmina Mutabžija, PhD
  • Philipp Homar, Prof., PhD
  • Pavel Koukal, Prof., PhD
  • Richard Rak, PhD
