Projects

Active Projects

(SaTC) Accountable Information Use: Privacy & Fairness in Decision-Making Systems

Increasingly, decisions and actions affecting people's lives are determined by automated systems processing personal data. Excitement about these systems has been accompanied by serious concerns about their opacity and the threats that they pose to privacy, fairness, and other values. Recognizing these concerns, the investigators seek to make real-world automated decision-making systems accountable for privacy and fairness by enabling them to detect and explain violations of these values. The technical work is informed by, and applied to, online advertising, healthcare, and criminal justice, in collaboration with and as advised by domain experts.

Addressing privacy and fairness in decision systems requires providing formal definitional frameworks and practical system designs. The investigators provide new notions of privacy and fairness that deal with both protected information itself and proxies for it, while handling context-dependent, normative definitions of violations. A fundamental tension they address pits the access given to auditors of a system against the system owners' intellectual property protections and the confidentiality of the personal data used by the system. The investigators decompose such auditing into stages, where the level of access granted to an auditor is increased when potential (but not explainable) violations of privacy or fairness are detected. Workshops and public releases of code and data amplify the investigators' interactions with policy makers and other stakeholders. Their partnerships with outreach organizations encourage diversity. Read more >

Researchers: Helen Nissenbaum | Tom Ristenpart

Computational Histories Project | Mixed Reality in Civic & Rural Spaces

The Digital Life Initiative developed the Computational Histories Project to examine the ways in which emergent technologies can redress historical asymmetries in urban and rural environments. While Eleanor Roosevelt acts as a symbolic unifier across several research tracks, it is her legacy of public service and advocacy for marginalized voices that impels the project's technical, creative, and pedagogical ambitions.

Uniting leaders from the fields of tech, human rights, education, museology, and dance, this initiative hopes to deploy web-mapping and augmented reality systems (i) to transform architectural surfaces, civic spaces, and landscapes into digital canvases for museum curators; and (ii) to examine the user interplay between archival material, data, gesture, and geolocation. In concert with these technical and creative ambitions are aims (iii) to design educational curricula that blur disciplinary boundaries between computational, curatorial, and choreographic thinking; and (iv) to foster cross-campus initiatives with Cornell researchers in New York City, Ithaca, and Washington, D.C. Read more >

Researchers: Michael Byrne

Understanding and Tearing Down Barriers to Cryptographic Solutions for Real-World Privacy Problems

Modern cryptography enables remarkably versatile uses of information while simultaneously maintaining (partial) secrecy of that information. Techniques such as secure multiparty computation (MPC) and homomorphic encryption (HE) have opened a vast realm of new possibilities to protect privacy online, enabling the design and development of previously impossible (sometimes seemingly paradoxical) privacy-preserving services. However, adoption of cryptographic privacy-enhancing technologies has so far been rare. Poor usability, user and organizational unawareness, misaligned economic incentives, and inefficiency have long been assumed to form the complex network of interacting factors that prevents the adoption of these technologies.

In this project we aim to go beyond these assumptions and hypotheses and provide an analytical framework that explains, firstly, whether cryptographic techniques are suited to the real-world privacy problems we as a society currently face. Secondly, the framework will enable us to determine the institutional, organizational, commercial, and technological arrangements that should be in place to promote the adoption and successful deployment of cryptographic solutions for privacy.
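
To make concrete what these cryptographic techniques enable, here is a toy sketch (an illustrative example, not an artifact of this project) using the open-source python-paillier library, in which an untrusted party adds two values that it only ever sees in encrypted form:

    # Illustrative sketch only, assuming the open-source python-paillier
    # package ("phe") is installed: pip install phe
    from phe import paillier

    # The data owner generates a keypair and encrypts two sensitive values.
    public_key, private_key = paillier.generate_paillier_keypair()
    enc_a = public_key.encrypt(52000)
    enc_b = public_key.encrypt(61000)

    # An untrusted server can add the ciphertexts without learning either value.
    enc_total = enc_a + enc_b

    # Only the holder of the private key can recover the result.
    assert private_key.decrypt(enc_total) == 113000

Partially homomorphic schemes like Paillier support only addition (and multiplication by plaintext constants), but the same pattern underlies the richer privacy-preserving services that MPC and fully homomorphic encryption make possible.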

Researchers: Ero Balsa | Sunoo Park | Helen Nissenbaum

Establishing Frameworks and Protocols for Consequentialist Regulation of Technology

Technology companies are under significant and ongoing scrutiny, which has generated a range of proposed (but largely unimplemented) regulatory responses. In that regulatory vacuum, technology companies have been governed and limited mainly by internal policies that follow internally generated, rules-based regimes suitable for review and adjudication by "trust and safety" teams. These procedures are generally independent of the companies' product, technical development, and growth teams. And while rules-based regimes are suitable for some of the problems these companies face, we argue that the centralization of these strategies serves to obscure necessary limitations on the products themselves. Instead, we propose an alternative set of methods for assessing and limiting harms directly, and a set of principles and mechanisms for building this process into product development.

In this research, we evaluate existing frameworks for regulation and offer an alternative view of the types of approaches that would be sufficient to mitigate harms among at-risk populations. We explore opportunities for operationalizing such approaches and what it would take to put meaningful limitations in place. We further explore how government and third parties could participate in a functional regime in coordination with technology companies.

Researchers: Nathaniel Lubin | Thomas Gilbert

Risk Assessment in Privacy Policy and Engineering: From Theory to Practice

Privacy risk assessments have been touted as a mechanism to guide the design and development of privacy-preserving systems and services. They play a prominent role in Europe's General Data Protection Regulation (GDPR), where they are proposed as a method to balance the benefits of processing personal data against the negative impact on citizens' rights. In the US, risk assessment is the centerpiece of NIST's Privacy Framework, a tool that similarly seeks to help organizations build products and services that protect individuals' privacy. Privacy protection thus joins a growing list of domains where risk assessment has already been adopted, including cybersecurity, finance, public health, life insurance, and environmental protection.

Strikingly, however, there is a dearth of concrete guidelines and actuarial models to assess privacy risk in a consistent and repeatable way. Existing guidelines are vague and arbitrary rather than evidence-based; they lend themselves to gaming, and they both contribute to and stem from existing processes of legal endogeneity in privacy regulation. This disconnect echoes the challenges and limitations of privacy by design, the most popular doctrine for bridging privacy policy and engineering: the guiding principles make sense on paper, but they are hard to translate into system design.

In this project we aim to identify and bridge the gaps between policy and practice inherent in current approaches to privacy risk assessment. Firstly, we adopt a critical stance to question the wide scope in which privacy risk assessments are being assumed to operate, identifying blind spots that both policymakers and practitioners must grapple with. Secondly, we incorporate risk assessments into established (academic) privacy engineering practice, leading to new insights into the (limited) role that risk assessment should play in privacy policy and engineering. 

Researchers: Ero Balsa

Securing Election Infrastructure (with Law, Technology, and Paper)

American elections currently rely on outdated and vulnerable technology. Computer science researchers have shown that voting machines and other election equipment used in many jurisdictions are plagued by serious security flaws, or are even shipped with basic safeguards disabled, leaving the door open to inexpensive, unsophisticated attacks and other failures that might go undetected. Making matters worse, it is unclear whether current law requires election authorities or companies to fix even the most egregious vulnerabilities in their systems, and whether voters have any recourse if they do not. This research project explores the role of (in)secure technology in US elections from the perspectives of both law and computer science, discussing legal protections for election system security, as well as ways that the use of modern technology can help or harm election system security.

Researchers: Sunoo Park

Collaborating with Municipal Governments on Assessing the Digital Rights Impacts of Their Datafied Technologies

In this project, DLI Postdoctoral Fellow Meg Young engages with municipal governments to advise on, collaborate on, and evaluate how digital rights are integrated into city technologies. As a Fellow in the City of New York's Office of Technology and Innovation, Meg is working on two tools for supporting city agencies. The first is a Digital Rights Impact Report intended to foster public dialogue about IoT systems in use and their digital rights implications for New York City residents; she is leading outreach to digital rights and experiential experts to refine the draft document and pilot the proposed process this spring, and later this year the team plans to share it as a model for agency use. The second project, led by Dr. Eric Corbett, aims to engage the public on their key questions about city IoT and on how to effectively communicate the answers to those questions in the built environment. Meg also took part in creating the NYC AI Strategy, which emphasizes digital ethics and cross-sector collaboration.

As part of this collaboration with the City, Meg has also volunteered with the NYC Office of the Public Advocate's Data Tech & Justice Group, particularly on #BantheScan, a campaign to ban facial recognition technology in New York City. Her work at the Public Advocate's office supported two public events under this campaign: a #BantheScan Youth Forum that showcased local and international youth activism, and an event convening Black faith leaders on the long-term implications of AI.

Researchers: Meg Young | Phillip Ellison | Anne Hohman, Paul Rothman, & Neal Parikh (City of New York Office of Technology and Innovation)

Recent Publications

Fast or Accurate? Governing Conflicting Goals in Highly Autonomous Vehicles

By A. Feder Cooper and Karen Levy

Colorado Technology Law Journal, 2022

Forthcoming

Making the Unaccountable Internet: The Changing Meaning of Accounting in the Early ARPANET

By A. Feder Cooper and Gili Vidan 

Under Submission, 2022

Read more >

Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning

By A. Feder Cooper, Benjamin Laufer, Emanuel Moss, and Helen Nissenbaum 

FAccT 2022

Read more >

How the Free Software and the IP Wars of the 1990s and 2000s Presaged Today’s Toxic, Concentrated Internet

By Elettra Bietti 

Promarket, 2022

Read more >

Whitepaper: Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems

By Thomas Gilbert

CLTC, Berkeley, 2022

Read more >

Introducing a Practice-Based Compliance Framework (PCF) for Addressing New Regulatory Challenges in the AI Field

By Mona Sloane and Emanuel Moss

CPI TechREG, 2022

Read more >

A New Proposed Law Could Actually Hold Big Tech Accountable for Its Algorithms

By Jacob Metcalf, Brittany Smith, and Emanuel Moss

Slate, 2022

Read more >

Facebook Has a Superuser-Supremacy Problem

By Matthew Hindman, Nathaniel Lubin, and Trevor Davis

The Atlantic, 2022

Read more >

Conflict of Interest in AI Ethics: An Agonistic Path Forward

By Meg Young, Michael Katell, and P. M. Krafft

FAccT, 2022

Forthcoming

Understanding Local News Social Coverage and Engagement at Scale During the Covid-19 Pandemic

By Marianne Aubin Le Quere

ICWSM, 2022

Forthcoming

Four Years of FAccT (Fairness, Accountability and Transparency)

By Benjamin Laufer, Sameer Jain, A. Feder Cooper, Jon Kleinberg, and Hoda Heidari 

FAccT 2022

Forthcoming

Care Infrastructures for Digital Security in Intimate Partner Violence

By Emily Tseng, Mehrnaz Sabet, Rosanna Bellini, Harkiran Kaur Sodhi, Thomas Ristenpart, and Nicola Dell

ACM CHI, 2022

Read more >

(Unmaking) as Agonism: Using Participatory Design with Youth to Surface Difference in an Urban Context

By Samar Sabie, Steven J. Jackson, Wendy Ju, and Tapan Parikh

ACM CHI, 2022

Read more >

The Material Consequences of "Chipification": The Case of Software-Embedded Cars

By MC Forelle

Big Data & Society, 2022

Forthcoming

Information Needs of Essential Workers During the Covid-19 Pandemic

By Marianne Aubin Le Quere

CSCW, 2022

Forthcoming

From Ethics Washing to Ethics Bashing: A Moral Philosophy View on Tech Ethics

By Elettra Bietti 

Journal of Social Computing, 2021

Read more >

Hyperparameter Optimization Is Deceiving Us, and How to Stop It

By A. Feder Cooper, Yucheng Lu, Jessica Zosa Forde, and Chris De Sa 

NeurIPS 2021

Read more >

Workshops & Events

Yale Law School Freedom of Expression Scholars Conference

Conference co-organized by Elettra Bietti 

Yale University, April 30 - May 1, 2022

Read more >

Concretizing the Material & Epistemological Practices of Unmaking in HCI

Workshop co-organized by Samar Sabie

CHI, April 21, 2022

Read more >

Triangulating Race, Capital, and Technology

Workshop co-organized by Cindy Lin, Rachel Kuo, Yuchen Chen, and Seyram Avle

CHI, April 30 - May 1, 2022

Read more >

Toward a Material Ethics of Computing

Workshop co-organized by Cindy Lin, Jen Liu, Anne Pasek, Robert Soden, Lace Padilla, Daniela Rosner, and Steve Jackson

CHI, April 30, 2022

Read more >

The Costs of Recalibration: The Spatial, Material, and Labor-related Requirements for Recalibrating Smart Automotive Technologies

Abstract by MC Forelle accepted to the "Toward a Material Ethics of Computing" CHI workshop

CHI, April 30, 2022

Read more >

Building Accountable and Transparent RL

Workshop co-organized by Thomas Gilbert

Reinforcement Learning and Decision-Making (RLDM 2022), Brown University, June 2022

Read more >

Talks & Panels

Women in Data Science 

Panelist: Marianne Aubin Le Quere

New York University, April 12, 2022

Read more >

Producing Personhood: How Designers Perceive and Program Voice Assistant Devices

Speakers: Margot Hanley and Hannah Wohl

American Sociological Association (ASA) Annual Meeting, August 5-9, 2022

Read more >

(Re)uniting STEM and the Humanities: Toward Cultural and Critical Histories of Mathematics and Computing

Speaker: Ellen Abraham

University of Texas Permian Basin Undergraduate Research Day, April 22, 2022

Read more >

Operationalizing Responsible AI

Panelist: Meg Young

Brookings Institution, April 5, 2022

Read more >

Remaking Ground Truth: From Field Observation to Weak Supervision

Speaker: Cindy Lin

Oxford Digital Ethnography Seminar, Oxford Internet Institute, February 21, 2022

Read more >

From Principles to Practice: What Next for Algorithmic Impact Assessments?

Panelist: Emanuel Moss 

Ada Lovelace Institute, March 28, 2022 

Read more >

From Pixel to Point: A Story of Data Annotation Labor

Speaker: Cindy Lin

Transmediale Festival, January 29, 2022

Read more >

More DLI Accomplishments

DLI Members Receive Antimonopoly Grants

Received by Elettra Bietti and Madiha Z. Choksi

Economic Security Project, 2022

Read more >

Distinguished Dissertation Award

Received by Cindy Lin

ProQuest, 2022

Read more >

MSR Summer Internship

Margot Hanley received an internship with Nancy Baym at the Social Media Collective

Microsoft Research, 2022

Read more >

Prestigious Grant Received From the Notre Dame-IBM Tech Ethics Lab

Received by Thomas Gilbert, Margot Hanley, and Karen Levy

January 28, 2022

Read more >

Bridging the Gap Between Best Practices and Software Development Workflows

Meg Young contributed to the Human-AI Experience “HAX” Toolkit for Microsoft

Microsoft, 2022

Read more >

Previous Publications & Talks

Privacy from an Ethical Perspective

Research News

For more information on the latest conference activities, publication updates, and accolades of the DLI community, visit our news section or follow us on Twitter.
