AI–human interaction: Soft law considerations and application
Catherine Casey, Ariana Dindiyal and James A. Sherer
Additional contact information
Catherine Casey: Chief Growth Officer, Reveal Brainspace, USA
Ariana Dindiyal: Associate, BakerHostetler, USA
James A. Sherer: Partner, BakerHostetler, USA
Journal of AI, Robotics & Workplace Automation, 2022, vol. 1, issue 4, 360-370
Abstract:
This paper defines the utilisation of ‘soft law’ concepts and structures generally, considers the application of soft law to the perceived gap between artificial intelligence (AI) approaches and normal human behaviours, and subsequently explores the challenges presented by this soft law application. The authors submit that AI is only becoming more prevalent, and increased uses of this technology logically create greater opportunities for ‘friction’ when human norms and AI processes intersect, especially those processes that seek to replace human actions, albeit inconsistently and imperfectly. This paper considers that friction inevitable, but instead of offering wholesale objections or applying legal requirements to AI’s imperfect intrusions into humans’ daily lives, the authors consider ways in which soft law can smooth the path to where we are collectively headed. As human–computer interaction increases, the true role of AI and its back-and-forth with humans on a day-to-day basis is itself rapidly developing into a singular field of study. And while AI has undoubtedly had positive effects on society that lead to efficient outcomes, the development of AI has also presented challenges and risks to that which we consider ‘human’: risks that call for appropriate protections. To address those concepts, this paper establishes definitions to clarify the discussion and its focus on discrete entities; examines the history of human interaction with AI; evaluates the (in)famous Turing Test; and considers why a gap or ‘uncanny valley’ between normal human behaviour and current AI approaches is unsettling and potentially problematic. It also considers why certain types of disclosure regarding AI matter, are appropriate, and can assist in addressing the problems that may arise when AI attempts to function as a replacement for ‘human’ activities. Finally, it examines how soft law factors into the equation, filling a need and potentially becoming a necessity. It considers the use case of how one US legislative body initiated such a process by addressing problems associated with AI and submits that there is a need for additional soft law efforts, one that will persist as AI becomes increasingly important to daily life. In sum, the paper considers whether the uncanny valley is not a challenge so much as a barrier to protect us, and whether soft law might help create or maintain that protection.
Keywords: artificial intelligence (AI); soft law; Turing Test; uncanny valley; chatbot
JEL-codes: G2 M15
Date: 2022
Downloads:
https://hstalks.com/article/7198/download/ (application/pdf)
https://hstalks.com/article/7198/ (text/html)
Requires a paid subscription for full access.
Persistent link: https://EconPapers.repec.org/RePEc:aza:airwa0:y:2022:v:1:i:4:p:360-370
More articles in Journal of AI, Robotics & Workplace Automation from Henry Stewart Publications
Bibliographic data for series maintained by Henry Stewart Talks.