(SaTC) Accountable Information Use: Privacy & Fairness in Decision-Making Systems
Increasingly, decisions and actions affecting people's lives are determined by automated systems processing personal data. Excitement about these systems has been accompanied by serious concerns about their opacity and the threats that they pose to privacy, fairness, and other values. Recognizing these concerns, the investigators seek to make real-world automated decision-making systems accountable for privacy and fairness by enabling them to detect and explain violations of these values. The technical work is informed by, and applied to, online advertising, healthcare, and criminal justice, in collaboration with and as advised by domain experts.
Addressing privacy and fairness in decision systems requires providing formal definitional frameworks and practical system designs. The investigators provide new notions of privacy and fairness that deal with both protected information itself and proxies for it, while handling context-dependent, normative definitions of violations. A fundamental tension they address pits the access given to auditors of a system against the system owners' intellectual property protections and the confidentiality of the personal data used by the system. The investigators decompose such auditing into stages, where the level of access granted to an auditor is increased when potential (but not explainable) violations of privacy or fairness are detected. Workshops and public releases of code and data amplify the investigators' interactions with policy makers and other stakeholders. Their partnerships with outreach organizations encourage diversity.
Computational Histories Project | Mixed Reality in Civic & Rural Spaces
The Digital Life Initiative developed the Computational Histories Project to examine the ways in which emergent technologies can redress historical asymmetries in urban and rural environments. While Eleanor Roosevelt acts as a symbolic unifier across several research tracks, it is her legacy of public service and advocacy for marginalized voices that animates the project's technical, creative, and pedagogical ambitions.
Uniting leaders from the fields of tech, human rights, education, museology, and dance, this initiative hopes to deploy web-mapping and augmented reality systems (i) to transform architectural surfaces, civic spaces, and landscapes into digital canvases for museum curators; and (ii) to examine the user interplay between archival material, data, gesture, and geolocation. In concert with these technical and creative ambitions are aims (iii) to design educational curricula that blur disciplinary boundaries between computational, curatorial, and choreographic thinking; and (iv) to foster cross-campus initiatives with Cornell researchers in New York City, Ithaca, and Washington D.C.
Researchers: Michael Byrne
Understanding and Tearing Down Barriers to Cryptographic Solutions for Real-World Privacy Problems
Modern cryptography enables remarkably versatile uses of information while simultaneously maintaining (partial) secrecy of that information. Techniques such as secure multiparty computation (MPC) and homomorphic encryption (HE) have opened a vast realm of new possibilities to protect privacy online, enabling the design and development of previously impossible — sometimes, seemingly paradoxical — privacy-preserving services. However, adoption of cryptographic privacy-enhancing technologies has so far been rare. Poor usability, user and organizational unawareness, misaligned economic incentives, and inefficiency have long been hypothesized to form the complex network of interacting factors that prevents adoption of these technologies.
In this project we aim to go beyond these assumptions and hypotheses to provide an analytical framework that explains, first, whether cryptographic techniques are suitable to address the real-world privacy problems we as a society currently face. Second, the framework will enable us to determine the institutional, organizational, commercial, and technological arrangements that should be in place to promote the adoption and successful deployment of cryptographic solutions for privacy.
Establishing Frameworks and Protocols for Consequentialist Regulation of Technology
Technology companies are under significant and ongoing scrutiny, generating a range of proposed (but largely not implemented) regulatory responses. In that regulatory vacuum, technology companies have largely been governed and limited only by internal regulatory policies, which follow internally generated rules-based regimes suitable for review and adjudication by "trust and safety" teams. These procedures are generally independent from the product, technical development, and growth teams of companies. And while rules-based regimes are suitable for some problems these companies face, we argue that the centralization of these strategies serves to obscure necessary limitations to the products themselves. Instead, we pose an alternative set of methods for assessing and limiting harms directly, and a set of principles and mechanisms for building this process into product development.
In this research, we evaluate existing frameworks for regulation and offer an alternative account of what types of approaches would be sufficient to address the needs of harm mitigation among at-risk populations. We explore implementation opportunities for operationalizing these types of approaches, and what it would take for meaningful limitations to be implemented. We further explore how government and third parties could participate in a functional regime in coordination with technology companies.
Privacy risk assessments have been touted as a mechanism to guide the design and development of privacy-preserving systems and services. They play a prominent role in Europe’s General Data Protection Regulation (GDPR), where they are proposed as a method to balance the benefits of processing personal data against the negative impact on citizens’ rights. In the US, risk assessment is the centerpiece of NIST’s Privacy Framework, a tool that similarly seeks to help organizations build products and services that protect individuals’ privacy. Privacy protection thus joins a growing list of domains where risk assessment has already been adopted, such as cybersecurity, finance, public health, life insurance, and environmental protection.
Researchers: Ero Balsa
Securing Election Infrastructure (with Law, Technology, and Paper)
American elections currently rely on outdated and vulnerable technology. Computer science researchers have shown that voting machines and other election equipment used in many jurisdictions are plagued by serious security flaws, or even shipped with basic safeguards disabled — leaving the door open to inexpensive, unsophisticated attacks and other failures that might go undetected. Making matters worse, it is unclear whether current law requires election authorities or companies to fix even the most egregious vulnerabilities in their systems, and whether voters have any recourse if they do not. This research project explores the role of (in)secure technology in US elections from the perspectives of both law and computer science, discussing legal protections for election system security, as well as ways that the use of modern technology can help or harm election system security.
Researchers: Sunoo Park
Collaborating with Municipal Governments on Assessing the Digital Rights Impacts of Their Datafied Technologies
In this project, the DLI Postdoctoral Fellow engaged with municipal governments to help advise, collaborate on, and evaluate how digital rights are integrated into city technologies. As a Fellow in the City of New York's Office of Technology and Innovation, Meg is working on two tools for supporting city agencies. The first is a Digital Rights Impact Report intended to foster public dialogue about IoT systems in use and their digital rights implications for New York City residents. She is leading outreach to digital rights experts and experiential experts to refine the draft document and the proposed process as a pilot this spring. Later this year, the team plans to share it as a model for agency use. The second project, led by Dr. Eric Corbett, aims to engage the public on their key questions about city IoT and how to effectively communicate the answers to these questions in the built environment. Meg also took part in creating the NYC AI Strategy, which emphasized digital ethics and cross-sector collaboration.
As part of this collaboration with the City, Meg has also volunteered in the NYC Office of the Public Advocate with its Data Tech & Justice Group, particularly on #BantheScan, a campaign to ban facial recognition technology in New York City. Her work at the Public Advocate’s office supported two public events under this campaign, including a #BantheScan Youth Forum that showcased local and international youth activism, and an event convening Black faith leaders on the long-term implications of AI.
Talks & Panels
Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning
By A. Feder Cooper, Benjamin Laufer, Emanuel Moss, and Helen Nissenbaum
Workshops & Events
The Costs of Recalibration: The Spatial, Material, and Labor-related Requirements for Recalibrating Smart Automotive Technologies
Abstract by MC Forelle accepted into "Toward a Material Ethics of Computing" CHI workshop
CHI, April 30, 2022
Talks & Panels
Producing Personhood: How Designers Perceive and Program Voice Assistant Devices
Speakers: Margot Hanley and Hannah Wohl
American Sociological Association (ASA), Annual Meeting, August 5-9, 2022
(Re)uniting STEM and the Humanities: Toward Cultural and Critical Histories of Mathematics and Computing
Speaker: Ellen Abraham
University of Texas Permian Basin Undergraduate Research Day, April 22, 2022