Designing connected and automated vehicles around legal and ethical concerns: Data protection as a corporate social responsibility

Paolo Balboni, Kate Francis, Anastasia Botsi, Martim Taborda Barata

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Professional

Abstract

Emerging technologies and tools based on Artificial Intelligence (AI), such as connected and automated vehicles (CAVs), present novel regulatory and legal compliance challenges while at the same time raising important questions with respect to ethics and transparency. On the one hand, CAVs bring to light theoretical and practical challenges to the implementation of the multi-dimensional obligations of the current European personal data protection legal framework, including the General Data Protection Regulation (GDPR), the ePrivacy Directive, 1 and, where applicable, the Directive concerning measures for a high common level of security of network and information systems (NIS Directive or NISD). 2 As mere examples, CAV developers currently face multiple legal hurdles, including the necessity to fulfil controller and/or processor obligations in complex data processing scenarios 3 and tensions with the GDPR's principle of purpose limitation 4 (which is at odds with the autonomous processing of personal data through AI in the CAV, which may be based on a (re)interpretation of goals or, possibly, a shift in focus away from the original goal for which the personal data was collected). Additionally, the overall need for relatively large datasets to properly train and leverage AI functionalities leads to conflicts with the principle of data minimization. 5 When applied to AI systems, the requirement of data protection by design and by default also presents difficulties, as data protection by default is possible only when the personal data necessary for a specific purpose is processed. 6 Moreover, the ePrivacy Directive has been interpreted by European Supervisory Authorities - notably, the European Data Protection Board (EDPB) 7 - as requiring a company wishing to store or access information stored within a CAV to obtain specific consent from CAV users for these specific activities.
Furthermore, an additional legal basis must be determined (possibly necessitating a double request for consent by those companies) for any subsequent use of the information stored or accessed, such as the analysis of telematics data collected from a CAV. This interpretation creates challenges at both the technical and legal levels, in particular where the legal basis defined for subsequent use of CAV information is not consent, such as in the case of pay-as-you-drive insurance, where the contract entered into between the CAV user and an insurance company serves as the legal basis for the processing of their personal data. In this context, a conflict emerges between the legal basis used for information storage/access - consent, which must be freely withdrawable under the GDPR 8 - and the legal basis used for information use - e.g., performance of a contract, which will typically not be compatible with the possibility for the CAV user to freely prevent the insurance company from continuing to process their personal data. Concerns from the data security 9 perspective are also highly relevant, notably due to the lack of shared security standards in the CAV domain and the increased potential attack surface caused by the interconnection of different CAV components. 10 On the other hand, while European data protection legislation such as the GDPR, the ePrivacy Directive, and the NISD provides a minimum level of legal safeguards for citizens, it may not suffice to maximize CAV benefits for users while minimizing their potential negative impact on society. 11 In order to properly and comprehensively address the risks brought about by CAVs, ethics 12 and human rights concerns must therefore take a central role in every stage of the CAV development lifecycle, embedding the notions of fairness, transparency, and security into design processes.
Transparency 13 is situated between the legal and ethical dimensions. It is challenged by the complexity of AI systems and by the inherent autonomy and flexibility of automated decision-making, and it is key to the development of the framework as a prerequisite for trustworthy, ethical, and fair data processing. This paper explores the closely linked legal principles and ethical aspects that should be taken into consideration by stakeholders in the CAV landscape and provides a roadmap to be used by CAV researchers, developers, and all those who seek to create and implement technologies to carry out data processing activities within this domain in a compliant, fair, and trustworthy manner. As a result of the inherent link between the legal and ethical concerns, the authors present a holistic approach to design and development which is intended to overcome the challenges posed to European personal data protection legal principles and obligations by involving ethics and fairness. This approach, which goes beyond minimum legal requirements and proposes the application of a multidisciplinary framework, can be defined as Data Protection as a Corporate Social Responsibility, in accordance with the Maastricht methodology in this domain.

Original language: English
Title of host publication: Workshops of the 11th EETN Conference on Artificial Intelligence 2020 (SETN2020 Workshops)
Editors: George Giannakopoulos, Eleni Galiotou, Nikolaos Vasillas
Pages: 139-151
Number of pages: 13
Volume: 2844
Publication status: Published - 3 Sept 2020
Event: Workshops of the 11th EETN Conference on Artificial Intelligence - Athens, Greece
Duration: 2 Sept 2020 - 4 Sept 2020

Publication series

Series: CEUR Workshop Proceedings
Volume: 2844
ISSN: 1613-0073

Workshop

Workshop: Workshops of the 11th EETN Conference on Artificial Intelligence
Abbreviated title: SETN 2020 Workshops
Country/Territory: Greece
City: Athens
Period: 2/09/20 - 4/09/20

Keywords

  • Artificial intelligence
  • Automated vehicles
  • Connected
  • Corporate social responsibility
  • Data protection