TY - GEN
T1 - Designing connected and automated vehicles around legal and ethical concerns
T2 - Workshops of the 11th EETN Conference on Artificial Intelligence
AU - Balboni, Paolo
AU - Francis, Kate
AU - Botsi, Anastasia
AU - Barata, Martim Taborda
N1 - Publisher Copyright:
Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
PY - 2020/9/3
Y1 - 2020/9/3
N2 - Emerging technologies and tools based on Artificial Intelligence (AI), such as connected and automated vehicles (CAVs), present novel regulatory and legal compliance challenges while at the same time raising important questions with respect to ethics and transparency. On the one hand, CAVs bring to light theoretical and practical challenges to the implementation of the multi-dimensional obligations of the current European personal data protection legal framework, including the General Data Protection Regulation (GDPR), the ePrivacy Directive, and, where applicable, the Directive concerning measures for a high common level of security of network and information systems (NIS Directive or NISD). As mere examples, CAV developers currently face multiple legal hurdles, including the need to fulfil controller and/or processor obligations in complex data processing scenarios and tensions with the GDPR's principle of purpose limitation (which is at odds with the autonomous processing of personal data through AI in the CAV, which may be based on a (re)interpretation of goals or, possibly, a shift in focus from the original goal for which personal data was collected). Additionally, the overall need for relatively large datasets to properly train and leverage AI functionalities leads to conflicts with the principle of data minimization. When applied to AI systems, the requirement of data protection by design and by default also presents difficulties, as data protection by default is possible only when the necessary personal data is processed for a specific purpose. Moreover, the ePrivacy Directive has been interpreted by European Supervisory Authorities, notably the European Data Protection Board (EDPB), as requiring a company wishing to store or access information stored within a CAV to obtain specific consent from CAV users for these specific activities. Furthermore, an additional legal basis must be determined (possibly requiring those companies to make a double request for consent) for any subsequent use of the information stored or accessed, such as the analysis of telematics data collected from a CAV. This interpretation creates challenges at the technical and legal levels, in particular where the legal basis defined for subsequent use of CAV information is not consent, as in the case of pay-as-you-drive insurance, where the contract entered into between the CAV user and an insurance company serves as the legal basis for the processing of their personal data. In this context, a conflict emerges between the legal basis used for information storage/access (consent, which must be freely withdrawable under the GDPR) and the legal basis used for information use (e.g., performance of a contract, which will typically not be compatible with the possibility for the CAV user to freely prevent the insurance company from continuing to process their personal data). Concerns from the data security perspective are also highly relevant, notably due to the lack of shared security standards in the CAV domain and the increased potential attack surface caused by the interconnection of different CAV components. On the other hand, while European data protection legislation such as the GDPR, the ePrivacy Directive, and the NISD provides a minimum level of legal safeguards for citizens, it may not suffice to maximize CAV benefits for users while minimizing their potential negative impact on society. In order to properly and comprehensively address the risks brought about by CAVs, ethics and human rights concerns must therefore take a central role in every stage of the CAV development lifecycle, embedding the notions of fairness, transparency, and security into design processes. Transparency is situated between the legal and ethical dimensions; it is challenged by the complexity of AI systems and the inherent autonomy and flexibility of automated decision-making, and it is key in the development of the framework as a prerequisite for trustworthy, ethical, and fair data processing. This paper explores the closely linked legal principles and ethical aspects that should be taken into consideration by stakeholders in the CAV landscape and provides a roadmap for CAV researchers, developers, and all those who seek to create and implement technologies to carry out data processing activities in this domain in a compliant, fair, and trustworthy manner. Given the inherent link between the legal and ethical concerns, the authors present a holistic approach to design and development intended to overcome the challenges posed to European personal data protection legal principles and obligations by incorporating ethics and fairness. This approach, which goes beyond minimum legal requirements and proposes the application of a multidisciplinary framework, can be defined as Data Protection as a Corporate Social Responsibility, in accordance with the Maastricht methodology in this domain.
KW - Artificial intelligence
KW - Automated vehicles
KW - Connected
KW - Corporate social responsibility
KW - Data protection
M3 - Conference article in proceeding
VL - 2844
T3 - CEUR Workshop Proceedings
SP - 139
EP - 151
BT - Workshops of the 11th EETN Conference on Artificial Intelligence 2020 (SETN2020 Workshops)
A2 - Giannakopoulos, George
A2 - Galiotou, Eleni
A2 - Vasillas, Nikolaos
Y2 - 2 September 2020 through 4 September 2020
ER -