Summary: 1. Introduction. – 2. Literature Review. – 3. Methodology. – 4. Criminal Liability Definition for Artificial Minds and ChatGPT. – 4.1. A Mind Without a Body: Who Prosecutes ChatGPT? – 4.2. When ChatGPT Speaks Without Criminal Intent: Can It Be Prosecuted? – 4.3. Attributing Criminal Liability in the Age of Intelligent Machines. – 5. Challenges in Assessing the Criminal Liability of Artificial Intelligence. – 5.1. The Crisis of Criminal Law in the Era of ChatGPT. – 5.2. The Causality Dilemma in the Era of Generative Artificial Intelligence. – 5.3. ChatGPT and the Mens Rea Dilemma. – 6. The Legal and Regulatory Challenges of Generative AI. – 6.1. The Algorithmic Regulatory Gap and Legal Accountability of ChatGPT. – 6.2. Algorithmic Intelligence and Legitimate Risks. – 7. The Future of Criminal Liability for Algorithmic Harm. – 7.1. Rebuilding Generative AI's Criminal Liability. – 7.2. Future Studies: Corporate Criminal Law and AI Liability. – 8. Conclusions and Recommendations.
Background: This study examines the legal challenges posed by generative AI. It highlights the limitations of traditional criminal liability frameworks in addressing harm caused by AI outputs. The research explores new models of liability to ensure accountability while protecting individual rights in the age of intelligent machines.
Generative AI, exemplified by ChatGPT, has evolved from a mere computational tool into a cognitive agent capable of content creation, problem-solving, and decision-making. This evolution challenges traditional criminal law frameworks, raising complex questions about the attribution of liability when AI-generated outputs result in harm or criminal conduct. The study examines these dilemmas, focusing on the shortcomings of conventional concepts of criminal liability and the need for new legal paradigms.
Methods: The research employs a descriptive-analytical and comparative methodology. It analyses national and international legislation, legal principles, and contemporary jurisprudence, with a focus on the European Artificial Intelligence Act (2024) as a model. The study examines AI’s autonomous capabilities, the opacity of algorithmic decision-making, and the challenges of establishing causal links between AI actions and resulting harms. Case studies are used to explore potential liability models, including preventive liability and the concept of an "artificial actor."
Results and Conclusions: The study finds that traditional frameworks of criminal accountability are inadequate for AI systems such as ChatGPT, given their partial autonomy and algorithmic complexity. It highlights the potential for extending liability to developers, operators, and users, and the need for flexible legal models that combine preventive, administrative, and criminal measures. The research underscores the importance of integrating legal innovation with technological oversight to safeguard individual rights while preserving the deterrent and protective functions of criminal law.