On the legal responsibility of artificially intelligent agents
Addressing three misconceptions
DOI: https://doi.org/10.26116/techreg.2021.003

Keywords: legal responsibility, moral agency, legal agency, liability, artificially intelligent agents

Abstract
This paper tackles three misconceptions in discussions of the legal responsibility of artificially intelligent entities, namely that such entities:
(a) cannot be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act.
(b) should not be held legally responsible for their actions, because they do not have the prerequisite characteristics to be ‘real agents’ and therefore cannot ‘really’ act.
(c) should not be held legally responsible for their actions, because to do so would allow other (human or corporate) agents to ‘hide’ behind the AI and escape responsibility that way, while they are the ones who should be held responsible.
(a) is a misconception not only because (positive) law is a social construct, but also because there is no such thing as ‘real’ agency. The latter point is also why (b) is misconceived. The arguments against misconceptions (a) and (b) imply that legal responsibility can be constructed in different ways, including ways that hold both artificially intelligent and other (human or corporate) agents responsible, which addresses misconception (c). Accordingly, this paper concludes that there is more flexibility in the construction of the responsibility of artificially intelligent entities than is sometimes assumed. This offers more freedom to law- and policymakers, but it also requires openness, creativity, and a clear normative vision of the aims they want to achieve.
License
Copyright (c) 2021 Antonia Waltermann
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.