
DL Seminar | Strange Loops: Human Involvement in Algorithmic Decision-Making



By Sandra Ebirim and Nick Sempere | MS Students, Cornell Tech

At the Digital Life Seminar on October 3rd, 2019, Kiel Brennan-Marquez, Karen Levy, and Daniel Susser presented their recent work, which unpacks and analyzes the impact of human involvement in automated decision-making. They focused in particular on the appearance of human involvement, or the lack thereof, as compared to the reality.

This dichotomy between the appearance and the actuality of human involvement in artificial intelligence systems is increasingly relevant as these systems become ubiquitous. A large body of this work involves technology for healthcare, a space where the nature of this dichotomy is particularly consequential. Healthcare is also a useful domain for considering the concrete implications of the ideas that were presented.


Brennan-Marquez, Levy, and Susser define a human in the loop as a human agent exercising some degree of meaningful influence, up to and including override, over the disposition of particular cases. Most of the debate surrounding this topic has focused on the extent to which a human in the loop benefits society. By contrast, Brennan-Marquez, Levy, and Susser focus on whether people believe there is a human in the loop, and on the impact these perceptions can have. They define four main categories, as follows (see the sketch after the list):

  • Manifest Humanity: There is a human in the loop and this is generally known.

  • Fully Automated System: There is no human in the loop and this is generally known.

  • Skeuomorphic Humanity: There is no human in the loop but it is assumed that there is.

  • Faux Automation: There is a human in the loop but it is assumed that there isn’t.
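
Because the four categories are simply the four combinations of actual and perceived human involvement, the taxonomy can be made explicit in a few lines of code. The following is a minimal illustrative sketch of our own, not anything from the paper; the names `classify` and `misaligned` are hypothetical:

```python
from enum import Enum

class Category(Enum):
    MANIFEST_HUMANITY = "manifest humanity"          # human in the loop, and known to be
    FULLY_AUTOMATED = "fully automated system"       # no human in the loop, and known to be
    SKEUOMORPHIC_HUMANITY = "skeuomorphic humanity"  # no human in the loop, but one is assumed
    FAUX_AUTOMATION = "faux automation"              # human in the loop, but assumed absent

def classify(human_in_loop: bool, perceived_human: bool) -> Category:
    """Map the actual and perceived presence of a human in the loop
    onto the four categories defined above."""
    if human_in_loop:
        return Category.MANIFEST_HUMANITY if perceived_human else Category.FAUX_AUTOMATION
    return Category.SKEUOMORPHIC_HUMANITY if perceived_human else Category.FULLY_AUTOMATED

def misaligned(human_in_loop: bool, perceived_human: bool) -> bool:
    """Misalignment occurs exactly when perception and reality diverge."""
    return human_in_loop != perceived_human
```

The two problematic categories on which the rest of the discussion dwells, skeuomorphic humanity and faux automation, are precisely the cases where `misaligned` returns true.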

Potential instances of each category within healthcare are not difficult to imagine. Consider an artificial intelligence system whose role is to interpret laboratory data. Such a system might be put in place when the indicator for a lab test is difficult for a human to read accurately, or when the test returns extremely complicated data that is difficult to interpret. If that system were in fact fully autonomous and was thought of as such, then perception and actuality would be aligned. If, on the other hand, a human were involved somewhere in the process (perhaps as a failsafe or in data entry), the system would be an instance of faux automation.


In a completely different case, this same laboratory testing system may have been branded as, or assumed to have, a human ensuring high-quality results. If a human were indeed involved, the system would constitute manifest humanity. If not, the suggestion of human involvement would be nothing more than a skeuomorph.


The majority of the work the three authors presented focuses on the misalignments between perception and reality that arise in the cases of skeuomorphic humanity and faux automation, and on the impact these misalignments can have at the individual and institutional levels.


At the individual level, both skeuomorphic humanity and faux automation undermine the individual's capacity to understand and reason about automated systems. Skeuomorphic humanity confuses individuals about how to intervene in decision-making processes that concern them, while faux automation misconstrues the source of the problems an individual may be facing.


Given that so many healthcare technologies influence the patient in one capacity or another, the individual perspective is critical. A patient's perception of whether or not a human is involved in an automated process is an important factor both in their consent to using such a system and in their understanding of what its outputs truly mean. A system that patients incorrectly perceive to involve human influence may make them more comfortable trusting it. That trust, of course, would be misplaced.


Skeuomorphic humanity also poses concerns for a patient's dignity. As the paper notes: “[a] good example is the delivery of momentous information, as in recent debates over whether doctors should deliver grave prognoses via robot” (Brennan-Marquez, Levy, and Susser, 12). It would be easy for the designers of such a robot to see the value in making it feel or appear human; it might, for instance, have a human voice. However, a skeuomorphic misalignment may lead the patient to think that the robot is being operated, to some extent, by a human. Would this affect the way a patient responds to the robot? Receiving a grave prognosis is a deeply emotional experience, and leading a patient to believe that they can confide in a robot is questionable, to say the least.


At the institutional level, misalignment undermines the public's ability to reason about whether to use automated systems in the first place. Skeuomorphic humanity can lead society to accept inadvisable forms of automation without fully appreciating their costs, while faux automation distorts society's perception of what automation as a whole can actually do.


Naturally, heavily regulated domains such as healthcare would be most affected by these institutional concerns. Institutional adoption of misaligned technologies in hospitals, laboratories, drug production, or any other medical environment harms the public's ability to develop an informed opinion of those technologies. Stakeholders such as patients, healthcare providers, and policymakers are likewise left with a reduced capacity to evaluate misaligned systems. The question of how and when to adopt automated medical technologies becomes deeply unclear.


The work concludes with a discussion of how to address cases of misalignment. Its overarching argument is that, for both skeuomorphic humanity and faux automation, the best solution in almost all cases is to move to either full automation or manifest humanity. Automated systems are being introduced into the medical ecosystem at a rapid pace, and such a density of new technologies, many of them relatively young and under continued development, means that misalignments are more likely than not. It is critical, then, that we correct these misalignments as soon as possible.
