Abstract
This paper investigates the potential of regulatory sandboxes, a new and innovative regulatory instrument, to improve the cybersecurity posture of high-risk AI systems. First, the paper introduces AI regulatory sandboxes and their relevance under both the AI Act and the GDPR, with particular attention to the overlapping cybersecurity requirements derived from both pieces of legislation. The paper then outlines two emerging challenges for AI cybersecurity. The first, a factual challenge, relates to the still under-developed state of the art of AI cybersecurity; the second, a legal challenge, concerns the overlapping and uncoordinated cybersecurity requirements for high-risk AI systems stemming from both the AI Act and the GDPR. The paper argues that AI regulatory sandboxes are well suited to address both challenges, which in turn is likely to promote their uptake. It is then argued that this novel legal instrument aligns well with emerging trends in the field of data protection, including Data Protection as Corporate Social Responsibility and Cybersecurity by Design. Building on this ethical dimension, the paper assesses the many ethical risks connected with the uptake of AI regulatory sandboxes. It finally suggests that the ethical and corporate social responsibility dimension may offer a way to address the many risks and pitfalls of regulatory sandboxes, although further research on the topic is needed.
| Original language | English |
|---|---|
| Journal | CEUR Workshop Proceedings |
| Volume | 3731 |
| Publication status | Published - 1 Jan 2024 |
| Event | 8th Italian Conference on Cyber Security, ITASEC 2024 - Salerno, Italy. Duration: 8 Apr 2024 → 12 Apr 2024. https://2024.itasec.it/ |
Keywords
- AI Act
- cybersecurity
- data protection as a corporate social responsibility
- ethics
- GDPR
- regulatory sandboxes