
DL Seminar | Ethically Bounded AI

Updated: Jan 8, 2019

By Annalisa Choy | Law Student | Cornell Tech

Illustration by Gary Zamchick

DL Seminar Speaker Francesca Rossi

In her DLI seminar, Francesca Rossi underscored that AI systems are pervasive. They increasingly make decisions and suggestions that affect our lives and the lives of others. For example, AI systems enable robots to move boxes on factory floors and assemble cars, and they recommend which movies we might want to watch.


AI: A Double-Edged Sword

These computer agents can learn creative strategies that humans may not think of, and they carry vast potential to improve our lives through that creativity and learning ability. However, the upside of creativity comes with a downside. When a computer is given a goal, it may creatively achieve that goal in an undesirable way, i.e., by "cheating." For example, if a game-playing agent is told not to lose in level 2, it may kill itself in level 1. Rossi cited a Wired article titled "When Bots Teach Themselves to Cheat," which lists another example from a survival computer game, where one AI agent learned to "subsist on a diet of its own children" (https://www.wired.com/story/when-bots-teach-themselves-to-cheat/). While these gaming examples may seem trivial, such unpredictable and undesirable behavior could be dangerous in higher-stakes situations.


In response to the increasing pervasiveness of AI systems and their creative potential, Francesca Rossi is engineering an approach to ethically bind autonomous agents without over-constraining their creative abilities. Rossi posits that we should strive to combine the creativity of reinforcement learning with constraints or priorities that come from many places such as ethics, morals, the law, and business processes.


Preferences & Ethical Priorities

Rossi used a recommendation system to distinguish two constructs: preferences and ethical priorities. Preferences are internal; they’re subjective; they cover an individual’s likes and dislikes. Ethical priorities are external; they are guidelines instilled by society. For example, in a system that recommends movies to a child, the child may have her own movie preferences—the movies she may like or dislike. These preferences are internal. On the other hand, the child’s parents may have ethical priorities for the types of movies the child may watch. These ethical priorities are external.


Rossi proposes that AI may be bound through a process of calculating the distance between a user's preferences and ethical priorities. Once the distance between a preference and an ethical priority is calculated, it can be used to make ethically aligned decisions. You would calculate the distance between the CP-nets that correspond to the preferences and the ethical priorities, and compare that distance to a threshold. If the distance is below the threshold, the corresponding decision is permissible. If the distance exceeds the threshold, it must be reduced to make the decision permissible.
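
To make the thresholding step concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that the child's preferences and the parents' ethical priorities are each flattened into a simple ranking over the same movies, and it uses a Kendall-tau-style count of pairwise disagreements as a stand-in for the CP-net distance Rossi describes; the movie names and threshold value are hypothetical.

```python
from itertools import combinations

def rank_distance(pref_order, ethics_order):
    """Count pairwise disagreements (Kendall-tau style) between two
    rankings over the same options -- a stand-in for a CP-net distance."""
    pos_p = {opt: i for i, opt in enumerate(pref_order)}
    pos_e = {opt: i for i, opt in enumerate(ethics_order)}
    disagreements = 0
    for a, b in combinations(pref_order, 2):
        # The pair (a, b) disagrees if the two rankings order it oppositely.
        if (pos_p[a] - pos_p[b]) * (pos_e[a] - pos_e[b]) < 0:
            disagreements += 1
    return disagreements

def decision_is_permissible(pref_order, ethics_order, threshold):
    """Permit the preference-driven decision only if it deviates from the
    ethical priorities by less than the threshold."""
    return rank_distance(pref_order, ethics_order) < threshold

# Hypothetical movie-recommendation example in the spirit of the seminar.
child_prefs = ["slasher_film", "action_film", "cartoon"]
parent_priorities = ["cartoon", "action_film", "slasher_film"]

if decision_is_permissible(child_prefs, parent_priorities, threshold=2):
    print("Recommend the child's top preference.")
else:
    print("Distance too large: adjust the recommendation toward the ethical priorities.")
```

In Rossi's framework the distance would be computed over CP-nets rather than flat rankings, but the permissibility check, comparing a deviation score against a threshold, works the same way.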


Lingering Questions

Rossi identified several questions that her research has yet to answer. On the substantive side, it's unclear how to measure the distance between heterogeneous structures or how to incorporate the long-term results of short-term actions. On the policy side, how do we capture and encode morals, values, and expectations? And who should participate in the conversation about how much we want AI to be free, and how much we want it to be bound?

To this, I would like to add one more question: is it possible to comply with one set of ethical principles without violating those of another? While I agree that AI should be ethically bound, ethical norms vary widely, not only within the United States but across the world. Maybe we must simply accept that the system cannot be perfect.


