Trustworthy AI
a cooperative approach
DOI: https://doi.org/10.26116/techreg.2023.006
Keywords: Artificial Intelligence, Trustworthy AI, Artificial Intelligence Act, Automated Decision Making, Administrative Law
Abstract
The EU has proposed harmonized rules on artificial intelligence (AI Act) and a directive adapting non-contractual civil liability rules to AI (AI liability directive) in response to the increased demand for trustworthy AI. However, the concept of trustworthy AI is unspecific, covering various desired characteristics such as safety, transparency, and accountability. Trustworthiness requires a specific contextual setting involving human interaction with AI technology, and simply involving humans in decision processes does not guarantee trustworthy outcomes. In this paper, the authors argue for an informed notion of what it means for a system to be trustworthy and examine the concept of trust, highlighting its reliance on a specific relationship between humans that cannot be strictly transmuted into a relationship between humans and machines. They outline a trust-based model for a cooperative approach to AI and provide an example of what such an approach might look like.
License
Copyright (c) 2023 Jacob Livingston Slosser, Birgit Aasa, Henrik Palmer Olsen
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Submissions are published under a Creative Commons BY-NC-ND license.