Artificial Intelligence's Errors: Searching For A Guilty Party

    July 17, 2023

    One, two, three, go! 1. Google Photos' image-processing software mistakenly labeled a black couple as "gorillas"; 2. Facebook's automatic translation software rendered an Arabic post saying "good morning" into Hebrew as "hurt them," leading to the arrest of a Palestinian man in Beitar Illit, Israel; 3. Bing Chat cited ChatGPT-generated disinformation in one of its responses. Of course, there are far more than these three cases of AI malfunction; there are thousands of them, and their number is increasing by the second. Accordingly, a critical issue arises: when artificial intelligence or an algorithm malfunctions, who should be responsible, the programmer, the manufacturer, or the user?

    It must be you. It is a fact that liability legal frameworks everywhere are clearly unprepared for AI. They were conceived and enacted when humans caused most harm, with or without intention, but always with direct human input, as one author accurately put it. Not surprisingly, current liability inquiries tend to focus on the person who uses an AI algorithm.

    There's a little black box, holding all the truth about us. However, AI's malfunctions do not always derive from the users' own faults. So, let's turn to the programmers: the difficulty in putting the blame on machines lies in the impenetrability of the AI decision-making process. That is precisely the case with black-box AI models, which involve algorithms too complex to understand even for the programmer, because the programmer does not explicitly write the decision rule; the model learns it from data. Thus, we are back at the starting point: who is responsible?
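    To make that point concrete, here is a minimal, purely illustrative sketch in Python. It assumes scikit-learn and NumPy; the data, the feature names, and the scenario are invented for illustration and are not drawn from any real system. The point is simply that the programmer writes no explicit decision rule, so when the fitted model misclassifies someone, there is no single line of code that can be identified as the cause.

```python
# Illustrative sketch only: a "black-box" model whose decision rule is learned, not programmed.
# Assumes numpy and scikit-learn are installed; the data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Synthetic training data: 1,000 examples with two numeric features and a binary label.
X_train = rng.normal(size=(1000, 2))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# The programmer never writes an if/else rule for the decision; it is induced from the data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A single prediction: nothing in the source code says *why* this particular case
# was classified one way or the other -- the logic is spread across hundreds of fitted trees.
case = np.array([[0.2, -1.3]])
print("decision:", model.predict(case)[0])
```

    Even the engineer who trained such a model can usually describe only the training procedure, not reconstruct the reasoning behind any single output, and that is exactly what a liability inquiry would have to grapple with.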

    There is more than one answer. As always happens when such a complex issue is posed, the approach to it must be complex too. One solution is contemplated by the European Union legal framework, part of which is not yet enacted. It sets out a fourfold approach. First, a manufacturer of an AI system should be liable for damage caused by defects in its products, even if the defect was caused by changes made to the product under the producer's control after it had been placed on the market. Second, an operator of an AI system that carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation. Third, a service provider ensuring the technical framework has a higher degree of control than the owner or user of an actual product or service equipped with AI, and this should be taken into account in determining who primarily operates the technology. Lastly, a person using a technology that does not pose an increased risk of harm to others should still be required to abide by duties to properly select, operate, monitor, and maintain the technology in use and, failing to comply with those duties, should be liable for the breach if at fault. In addition, the proposed Artificial Intelligence Liability Directive creates a presumption of causality that gives claimants seeking compensation for damage caused by AI systems a more reasonable burden of proof and a better chance of a successful liability claim.

    In America, it has been suggested that a potential response is to hold everyone involved in the use and implementation of the AI system accountable. It has also been argued that cases involving AI must be brought before a tech-proficient court -- but does such a thing always exist?

    I do not know much. European regulation and American suggestions notwithstanding, the issue of legal liability arising from AI misuse is as unresolved as it is fascinating. Beyond the reach of regulation rises another, most worrying uncertainty: could judges and jurors, with their limited tech knowledge, handle this matter? Could we? AI is uncharted terrain; AI liability is only one of its compartments.

    "AI is already causing unintended harm. What happens when it falls into the wrong hands?", by David Evan Harris, https://www.theguardian.com/commentisfree/2023/jun/16/ai-new-laws-powerful-open-source-tools-meta

    "Artificial Intelligence & Liability – Whodunit?", by Pieter De Grauwe, https://www.gevers.eu/blog/trademarks/artificial-intelligence-and-liability-whodunit/

    "Commentary: Who should we hold responsible when AI goes wrong?", by Anantharaman Muralidharan, https://www.channelnewsasia.com/commentary/artificial-intelligence-liable-responsible-things-go-wrong-malfunction-user-trust-3241151

    "When AI in healthcare goes wrong, who is responsible?", by Olivia Goldhill, https://qz.com/1905712/when-ai-in-healthcare-goes-wrong-who-is-responsible-2

    "AI and liability 2023: a guide to liability rules for Artificial Intelligence in the UK and EU", by Katie Simmonds, Jenny Gibbs and Amber Collins for Womble Bond Dickinson (UK), https://www.lexology.com/library/detail.aspx?g=ca225bd9-a372-4f01-abcf-85c8fd4e5704

    "Who Is Liable When AI Kills?", by George Maliha and Ravi B. Parikh, https://www.scientificamerican.com/article/who-is-liable-when-ai-kills1/

    "AI Incident Database", https://incidentdatabase.ai/cite/470/

    "Black Box", by Stan Walker


    Author

    Martín Francisco Elizalde

    Martin Elizalde is an Argentine lawyer based in Buenos Aires. His areas of practice include forensic analysis, cybersecurity, and artificial intelligence.

