• Skip to primary navigation
  • Skip to main content
  • Skip to primary sidebar
  • Skip to footer

Center for Artificial Intelligence and Cybersecurity – AIRI

  • Home
  • About Us
    • Center Activities
    • Vision, Mission and Goals
    • Center Faculty
    • Steering Committee
    • Press
  • Research
    • Scientific Projects
    • Research Papers
  • Laboratories
    • Machine Learning
    • Natural Speech & Language Processing
    • Blockchain Technology
    • Information Processing & Pattern Recognition
    • AI in Medicine
    • Data Mining
    • Computer Vision
    • Complex Networks
    • Human-Computer Interaction
    • Maritime Cybersecurity
    • Autonomous Navigation
    • AI in Mechatronics
    • AI in Education
    • Hybrid Computational Methods
    • Drug Design
    • Legal Aspects of AI
    • Ethically Aligned AI
    • Cultural Complexity
    • Trustworthy and Explainable AI
  • Collaboration
    • Industry Collaboration
    • Industry Projects
    • International Collaboration
  • News
  • Contact
  • Login

Security and Privacy of Large Language Models: Threat Taxonomy, Ethical Implications, and Governance

24.04.2026

Large Language Models (LLMs) are increasingly deployed across professional and societal domains, introducing security, privacy, and governance challenges beyond traditional software vulnerabilities. Despite extensive research on individual risk categories, a unified lifecycle-oriented perspective connecting architectural properties, adversarial threats, and governance implications remains limited. This review examines security and privacy risks associated with LLMs through a lifecycle framework covering data acquisition, model training, alignment procedures, deployment, and post-deployment interaction. The study synthesizes prior research to construct a taxonomy of threats including prompt injection, jailbreaking, adversarial manipulation, training-stage attacks, privacy leakage, and socio-technical misuse. Ethical issues such as hallucination, bias amplification, and malicious use are analyzed alongside governance and regulatory frameworks. Results indicate that vulnerabilities in LLM systems arise primarily from probabilistic generation mechanisms, large-scale data ingestion, and complex deployment ecosystems rather than isolated implementation defects. Classical software vulnerability models therefore provide only partial coverage of risks associated with generative AI systems. The review is grounded in the concept of the alignment gap to explain how discrepancies between training objectives and real-world interaction contribute to persistent vulnerabilities. The findings highlight the need for lifecycle-oriented defense-in-depth strategies combining technical safeguards, privacy-preserving training, runtime monitoring, and governance mechanisms to support responsible deployment of LLM-based systems.

Authors:
Marko Pribisalić, Sanda Martinčić-Ipšić
Journal:
AI
Publishing date:
24.04.2026
View original article

Primary Sidebar

Latest Projects

Knowledge Graphs in the Era of Large Language Models (KGELL)

LIVE Quantum – Development of an Integrated AI Platform for Multichannel Personalized Management of User Requests

Predicting Anomalous Trajectories Using Machine Learning

Adaptive Algorithms for Integrating Compressive Sensing with Deep Learning

Deep Learning for Smart Energy Systems Management

Latest Research Papers

Security and Privacy of Large Language Models: Threat Taxonomy, Ethical Implications, and Governance

Deep Unfolding ADMM Network for CS Image Reconstruction with Long-Short Term Residuals

XDT-FMARL: An Explainable Federated Multi-Agent Reinforcement Learning Framework for Energy-Efficient IoT Task Offloading

Proactive Context Aware Task Offloading in Digital Twin Driven Federated IoT Systems with Large Language Models

Pretraining and evaluation of BERT models for climate research

Latest News

Knowledge Graphs in the Era of Large Language Models (KGELL)

LIVE Quantum – Development of an Integrated AI Platform for Multichannel Personalized Management of User Requests

LIVE Quantum project kick-off meeting

Pretraining and evaluation of BERT models for climate research

Invited lecture: “About the first GPS receiver on the Moon, and the other NASA space PNT stories” by James J. Miller (NASA)

We provide the expertise for solving real world problems using AI

If your company wants to implement artificial intelligence in your products or services, or increase your level of cybersecurity, our multidisciplinary team of scientists is your ideal partner.

Contact us

Footer

Center for Artificial Intelligence and Cybersecurity
  • jlerga@airi.uniri.hr
  • +385 51 406 500

University of Rijeka

University of Rijeka

About the Center

  • About Us
  • News
  • Privacy Policy
  • Contact

Center Activities

  • Laboratories
  • Scientific Projects
  • Industry Projects
  • Research Papers
  • Industry Collaboration
  • International Collaboration

Footer bottom left

© 2020 Center for Artificial Intelligence and Cybersecurity, all rights reserved.

Designed & developed by Nela Dunato Art & Design