Digital Life Seminar Archive

Thomas Krendl Gilbert
Cornell Tech
Are We All Miasmists Now? Parallels Between Recommender Systems and the History of Public Health
Attention capitalism has generated design processes and product development decisions that prioritize platform growth over all other considerations. To the extent limits have been placed on these incentives, interventions have primarily taken the form of content moderation. While moderation is important for what we call “acute harms,” societal-scale harms – such as negative effects on mental health and social trust – require new forms of institutional transparency and scientific investigation, which we group under the name accountability infrastructure.

Michal Gal
Center for Law and Technology, University of Haifa
Synthetic Data: Competitive and Human Dignity Implications
A data-generation revolution is underway. Until recently, most of the data used for decision-making was collected from events that take place in the physical world ("real" data). Yet it is forecast that by 2024, 60% of the data used to train artificial intelligence systems around the world will be synthetic (!). Synthetic data is artificially generated data that has analytical value. For some purposes, synthetic datasets can replace real data by preserving or mimicking their properties; for others, they can complement real data in ways that increase its accuracy or its privacy protection. The importance of this data revolution for our economies and societies cannot be overstated. It affects data access and data flows, potentially changing the competitive dynamics in markets where real data is not easily collected, and potentially affecting decision-making in many spheres of our lives.
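As a rough illustration of the concept (our sketch, not Gal's analysis), the toy code below fits a simple statistical model to hypothetical "real" data and samples from it, so the synthetic set mimics the original's dataset-level properties without copying any individual record; all data and parameters here are made up.

```python
import numpy as np

# Toy illustration: fit a simple model (mean + covariance) to "real"
# data, then sample synthetic records that mimic its statistics.
# All values are made up; real generators use far richer models
# (e.g., GANs or copulas) and add formal privacy protections.
rng = np.random.default_rng(seed=0)

# Hypothetical "real" data: 1,000 records with 3 numeric attributes.
real = rng.normal(loc=[50.0, 100.0, 5.0], scale=[10.0, 25.0, 2.0],
                  size=(1000, 3))

mu = real.mean(axis=0)              # empirical mean
cov = np.cov(real, rowvar=False)    # empirical covariance

# Synthetic records: drawn from the fitted model, not copied from
# any real individual, yet preserving the dataset-level properties.
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print("real mean:     ", np.round(real.mean(axis=0), 2))
print("synthetic mean:", np.round(synthetic.mean(axis=0), 2))
```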

Erin Miller
University of Southern California, Gould Law School
Quasi-State Action in First Amendment Theory
In this talk, Erin Miller will challenge the First Amendment orthodoxy that speech rights bind only the state. She will argue that the primary justification for the freedom of speech is to protect interests like autonomy, democracy, and knowledge from the kind of extraordinary power available to the state. If so, it applies with nearly equal force to any private agents with power over speech rivaling that of the state. Such a class of private agents, which she calls quasi-state agents, turns out to be a live possibility once we recognize that state power is more limited than it seems and can be broken down into multiple, equally threatening parts. They might include, for example, the largest social media platforms and powerful private employers.

Niva Elkin-Koren
Tel Aviv University
The By-design Approach Revisited: Lessons from Covid-19 Contact Tracing App
Niva Elkin-Koren is a Professor of Law at Tel-Aviv University Faculty of Law and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. She is a former Dean of University of Haifa Faculty of Law, and the founding director of the Center for Cyber, Law and Policy (CCLP) and of the Haifa Center for Law & Technology (HCLT).

Seth Lazar
Australian National University
Governing the Algorithmic City
A century ago, John Dewey observed that '[s]team and electricity have done more to alter the conditions under which men associate together than all the agencies which affected human relationships before our time'. In the last few decades, computing technologies have had a similar effect. Political philosophy's central task is to help us decide how to live together, by analysing our social relations, diagnosing their failings, and articulating ideals to guide their revision. But these profound social changes have left scarcely a dent in the model of social relations that most (analytical) political philosophers assume.

Judith Simon
Universität Hamburg
Dis/Trusting AI?
In this talk, Judith Simon will first turn to the question of whether we can sensibly talk about trust in AI systems. Proposing a socio-technical view of AI, she will argue that we can trust AI systems if we conceive of them as networks of technologies and human actors, but that we should trust them if and only if they are trustworthy. Simon will conclude her talk by outlining some epistemic and ethical requirements for trustworthy systems, along with two caveats.

Kathleen Creel
Northeastern University
Picking on the Same Person: The Ethics of Algorithmic Monoculture
Human mistakes are inevitable, but fortunately heterogeneous. Not so with machine decision-making. Using the same machine learning model for high-stakes decisions in many settings amplifies the strengths, weaknesses, biases, and idiosyncrasies of the original model. When the same person re-encounters the same model again and again, or models trained on the same dataset, she might be wrongly rejected again and again.
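A toy simulation (our illustration, not Creel's model) makes the arithmetic vivid: if five firms err independently at 10%, almost no applicant is rejected everywhere, but if all five share one model, a full 10% of applicants are rejected by every firm they approach.

```python
import numpy as np

# Toy monoculture simulation with assumed numbers: 100,000 qualified
# applicants, 5 firms, each decision wrong 10% of the time.
rng = np.random.default_rng(seed=1)
n_applicants, n_firms, error_rate = 100_000, 5, 0.10

# Heterogeneous world: each firm errs independently per applicant,
# so being wrongly rejected by all five requires five unlucky draws.
independent = rng.random((n_applicants, n_firms)) < error_rate
hetero_rate = independent.all(axis=1).mean()   # ~0.1**5 = 0.001%

# Monoculture world: one shared model's mistake repeats at every firm.
shared = rng.random(n_applicants) < error_rate
mono_rate = shared.mean()                      # ~10%

print(f"heterogeneous: {hetero_rate:.4%} rejected by every firm")
print(f"monoculture:   {mono_rate:.4%} rejected by every firm")
```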

John Basl
Northeastern University
What We Owe to Decision-subjects: Beyond Transparency and Explanation in Automated Decision-Making
In this paper, we defend what we call the Interpretability Thesis: in many contexts, decision-makers are morally obligated to avoid basing their decisions about how to treat decision-subjects on the outputs of non-interpretable ("black box") algorithmic decision systems. Others have defended this thesis, typically by arguing that we have duties of transparency to decision-subjects which require us to make certain information available to them. This approach, however, has been met with skepticism: skeptics question the grounds of these duties of transparency, and worry that it holds algorithmic decision systems to higher standards than human decision-makers, who also fail to meet such duties.

Daniel Susser
Penn State’s College of Information Sciences and Technology
Exploitation and Platform Power
Big tech “exploits” us. This has become a common refrain among critics of digital platforms. It gives voice to a shared sense that technology firms are somehow mistreating people—taking advantage of us, extracting from us—in a way that other data-driven harms, such as surveillance and algorithmic bias, fail to capture. But what does “exploitation” entail, exactly, and how do platforms perpetrate it? What would a theory of digital exploitation add to existing discussions about platform governance?

Meg Young
Cornell Tech
Data Ownership is Not Dispositive: Data Access Conflicts in Public-Private Contracting Relationships
When firms contract with government agencies to provide services, they regularly assert that some subset of their work is proprietary and confidential. At the same time, public agencies are subject to transparency requirements. In the State of Washington, agencies are governed by a strong Public Records Act, the state's freedom of information law, under which members of the public are granted access to a large share of government information by request. Public agencies also seek access to firms' data to advance accountability, equity, and oversight objectives. In both respects, data access is constrained in practice when firms assert that the data is a trade secret. Specifically, I analyze two public-private data sharing relationships as sites of contestation over data access and control.

Andre Esteva
Medical AI, Salesforce Research
Frontiers of Medical AI: Therapeutics and Workflows
As the artificial intelligence and deep learning revolutions have swept over a number of industries, medicine has stood out as a prime area for beneficial innovation. The maturation of key areas of AI – computer vision, natural language processing, etc. – has led to their successive adoption in certain application areas of medicine. The field has seen thousands of researchers and companies begin pioneering new and creative ways of benefiting healthcare with AI. Here we'll discuss two vitally important areas: therapeutics and workflows.

Amy B.Z. Zhang
Cornell Tech
Personalized Recommender Systems: Technological Impact and Concerns
Most of our online activities are, at least in part, powered by personalized recommender systems. While automatic pattern extraction as a technology holds great promise, it can also have alarming adverse impacts. This talk will give a high-level overview of common techniques for personalized recommender systems and how they connect to problems at both the personal and the societal level. It will also discuss some alternative approaches that address these issues, and why a solution cannot come from technology alone.
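As a sketch of one common technique such an overview might cover, the toy code below implements collaborative filtering via matrix factorization: users and items get low-dimensional vectors, predicted affinity is their dot product, and the vectors are learned from observed ratings. All sizes, ratings, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy collaborative filtering via matrix factorization, trained by
# stochastic gradient descent on made-up ratings.
rng = np.random.default_rng(seed=2)
n_users, n_items, k, lr, epochs = 50, 40, 8, 0.01, 200

# Hypothetical observed (user, item, rating) triples, ratings 1-5.
obs = [(int(rng.integers(n_users)), int(rng.integers(n_items)),
        int(rng.integers(1, 6))) for _ in range(500)]

U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
V = rng.normal(scale=0.1, size=(n_items, k))   # item factors

for _ in range(epochs):
    for u, i, r in obs:
        err = r - U[u] @ V[i]      # prediction error on this rating
        U[u] += lr * err * V[i]    # nudge user factors toward item
        V[i] += lr * err * U[u]    # and item factors toward user

scores = U[0] @ V.T                # predicted affinity for user 0
print("top 5 items for user 0:", np.argsort(scores)[::-1][:5])
```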

Salomé Viljoen
Cornell Tech | NYU
The Great Regulatory Dodge
This talk will feature some preliminary thoughts on how the current regulatory paradigm in privacy law enables digital technology companies to “dodge” privacy regulations with which other companies offering similar services must comply, and on how this “dodge” produces unfair rules for companies and undermines privacy protection for people. Analyzing how the current privacy regime facilitates the dodge is important for diagnosing the shortcomings of existing laws, for revealing harmful effects on individuals and social institutions, and for developing effective alternatives. Such diagnosis gains urgency in light of the growing scholarly and policymaker consensus around privacy law reform.

Ero Balsa
Cornell Tech
Privacy Engineering Through Obfuscation
Privacy engineering seeks to provide tools and methods to design privacy-preserving systems or patch privacy invasive ones. Obfuscation is one of the essential tools in the privacy engineering toolkit. But what can we learn from the plethora of methods and techniques that one may categorize as obfuscation? What can we learn from the role obfuscation plays in privacy engineering? In this talk, Ero Balsa will provide an overview of the two main reasons why privacy engineers resort to obfuscation: to enable people to protect themselves against unnecessarily privacy-invasive systems, and to modulate the level of exposure that providing utility to untrusted parties requires.
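To make the first of those two roles concrete, here is a minimal, hypothetical example of user-side obfuscation: reporting a deliberately noisy location instead of the true one. The coordinates, noise model, and radius are all assumptions for illustration, not a technique from the talk.

```python
import random

# Hypothetical user-side obfuscation: perturb a location before
# reporting it, so an over-collecting service only ever sees a
# coarse position.

def obfuscate(lat: float, lon: float, radius_deg: float) -> tuple:
    """Report a point drawn uniformly from a box around the truth."""
    return (lat + random.uniform(-radius_deg, radius_deg),
            lon + random.uniform(-radius_deg, radius_deg))

true_location = (40.7557, -73.9551)            # made-up user location
reported = obfuscate(*true_location, 0.05)     # ~5 km of slack
print("reported location:", reported)
```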

Renée DiResta
Stanford Internet Observatory
From Civics to COVID: Dynamics of Misinformation
From July 2020 to January 2021, Stanford Internet Observatory researchers worked with a coalition of researchers, government entities, tech companies, and civil society organizations in a multi-stakeholder partnership called the Election Integrity Partnership (EIP). Its mission was to rapidly detect high-velocity and potentially impactful false and misleading narratives related to voting. This talk will discuss findings from the partnership: how incidents became narratives, the rise of bottom-up misinformation, the dynamics of repeat spreaders, and the way in which platform policies shape message propagation.

Yan Ji
Cornell Tech
Proof of Liabilities
Proof of liabilities (PoL) is a cryptographic primitive for proving, in a decentralized manner, the size of the funds a bank owes to its customers, and can be used for solvency audits with better privacy guarantees. Most PoL schemes follow the same principle: a prover aggregates all of the user balances and enables each user to verify that their balance is included in the reported total. This process is probabilistic, and the more users who verify inclusion, the stronger the guarantee that the prover is not cheating. In this presentation, Yan Ji generalizes PoL, which was originally proposed for proving financial solvency, by extending the state-of-the-art PoL scheme with extra privacy features and making it applicable to domains outside finance, including transparent and private donations, new algorithms for disapproval voting and negative reviews, and publicly verifiable COVID-19 case counts.
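The aggregate-and-verify pattern described above can be sketched with a toy Merkle sum tree, in which every node commits to both a hash and a subtree balance. This is a bare-bones illustration with hypothetical users and balances; it omits the privacy features of real PoL schemes.

```python
import hashlib

# Bare-bones Merkle sum tree: each node is (hash, subtree balance).
# A user checks that their balance is counted in the published total
# by recomputing the root from their leaf plus sibling nodes.

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def parent(left, right):
    total = left[1] + right[1]
    return (H(left[0] + right[0] + str(total).encode()), total)

# Prover side: hypothetical users and balances; build the tree.
balances = {"alice": 10, "bob": 25, "carol": 7, "dave": 58}
leaves = [(H(f"{u}:{b}".encode()), b) for u, b in balances.items()]
l01 = parent(leaves[0], leaves[1])
l23 = parent(leaves[2], leaves[3])
root = parent(l01, l23)
print("published total liabilities:", root[1])   # 100

# Verifier side: "bob" recomputes the root from his own leaf and the
# sibling path the prover supplies (alice's leaf, then node l23).
bob_leaf = (H(b"bob:25"), 25)
assert parent(parent(leaves[0], bob_leaf), l23) == root
print("bob's balance is included in the total")
```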

Ari Waldman
Northeastern University
Misinformation and the Conservative as Victim
This is an early-stage project about misinformation. Over the last four years, legal scholars have been active in identifying legal definitions of, and developing legal responses to, the problem of misinformation, including assessing the constitutionality of those responses under the current Supreme Court's First Amendment jurisprudence. Less attention has been paid to how the law is already changing as a result of misinformation, and to how current legal doctrines and institutions are vulnerable to erosion because of misinformation already in the mix. This project brings together literature in sociology and social network theory about how information spreads with the doctrinal standards used in judicial review of government action.

Anthony Poon
Cornell Tech
Thinking Backwards from Improvement in Information Technology Action Research
In information technology for development and related fields, action-oriented researchers aim to design and evaluate how technology can be used to improve the lives of underserved populations around the globe. However, improvement is a value-laden concept with normative, causal, and methodological assumptions. There are many alternative definitions that can be difficult to engage with and tempting for an action researcher to ignore. However, these definitions can heavily influence the direction, design, and evaluation of such work. In this presentation, Anthony Poon discusses some potential perspectives on improvement, including human development, empowerment, and post-development, and how they have influenced some of his past and current work.

John W. Etchemendy (Moderator)
Stanford University
Debate: "Does AI Pose an Existential Threat to Humanity?"
DLI's inaugural debate was inspired by thinking through the provocations posed by the impact of ‘intelligent’ technologies on the future of human life. Will robots take over the planet? Will they undermine or erode what it means to be human in other more subtle or unanticipated ways? Is the preoccupation with intelligent machines a red herring? Or is the biggest threat posed by intelligent machines the affordances they provide to the humans who wield them?

Julia Stoyanovich
New York University
The Unbearable Lightness of Teaching Responsible Data Science
Although an increasing number of ethical data science and AI courses are available, the pedagogical approaches used in these courses rely exclusively on texts rather than on algorithmic development or data analysis. Technical students often consider these courses unimportant and a distraction from the “real” material. To develop instructional materials and methodologies that are thoughtful and engaging, we must strive for balance: between texts and coding, between critique and solution, and between cutting-edge research and practical applicability. In this talk, Julia Stoyanovich will discuss responsible data science courses that she has been developing and teaching to technical students at New York University since 2019, and will also speak to some ongoing work on teaching responsible data science to members of the public in a peer learning setting.

Mary Flanagan
Dartmouth College
Games as Social Transformation
Can games make the world a better place? Is it possible that we use games to make a difference in global challenges such as climate change or public health? Can we reduce societal biases, or encourage people to intervene in situations of danger, such as sexual assault? And how do we know the games are doing what they set out to do?

Robin Berjon & Ido Sivan-Sevilla
New York Times | Cornell Tech
AdTech & Our Privacy – Dark present, brighter future?
This joint session is about the digital advertising ecosystem. We highlight some of its disturbing practices against users’ privacy, explain the puzzling lack of GDPR enforcement despite clear data protection violations, offer a glimpse of how a major publisher with a significant ad operation, The New York Times, has been trying to safeguard the privacy of its readers without forgoing revenue, and conclude by looking ahead at current conversations in the web standards community on how to build an ad ecosystem without ubiquitous tracking.

Samar Sabie
Cornell Tech
Is Unmaking Design?
Design does more than supply the market with new products and services; it can raise provocations, critique existing socio-technical arrangements, seed conversations around matters of concern, and imagine radical alternatives. However, even when design is used as a critical provocation or political contestation, the focus is often on ‘making’ something new - a product, interface or artifact. That is because ‘unmaking’, a natural aspect of the designerly transformations always underway in the worlds around us, remains invisibilized and rarely theorized as its own explicit and intentional strategy.

Congzheng Song
Cornell Tech
Measuring the Unmeasured: New Threats to Machine Learning Systems
Machine learning (ML) is at the core of many Internet services and operates on users’ personal information. The deciding metric for deploying ML models is often test performance, which measures whether a model has learned the given task well. Test performance, however, does not measure other important properties of ML models, such as security vulnerabilities, privacy leakage, and compliance with regulations.
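One concrete example of such an unmeasured property is vulnerability to membership inference. The sketch below (a generic loss-threshold attack on simulated losses, not Song's method) shows how an overfit model's low training loss can reveal whether a record was in its training set.

```python
import numpy as np

# Generic loss-threshold membership inference on simulated losses:
# an overfit model assigns much lower loss to training records, so
# "loss < threshold" is a decent guess that a record was a member.
rng = np.random.default_rng(seed=3)
train_losses = rng.exponential(scale=0.2, size=1000)  # members: low loss
test_losses = rng.exponential(scale=1.0, size=1000)   # non-members

threshold = 0.5
tpr = (train_losses < threshold).mean()  # members correctly flagged
fpr = (test_losses < threshold).mean()   # non-members wrongly flagged
print(f"attack TPR: {tpr:.1%}, FPR: {fpr:.1%}")
# A gap between TPR and FPR means the model leaks membership, a
# property invisible to ordinary test accuracy.
```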

A Day of Reflection
In light of the US election, there will be no Digital Life Seminar scheduled for today. We look forward to seeing you next week!

Gary Johnson, Molly Turner & Ren Yee
Panel Discussion
The Platform Insurgency: Does Urban Tech Have an Ethics Problem?
Much of urban tech exploits today’s most ethically charged technologies and business practices, such as indiscriminate location tracking, facial recognition, and gig work, to fundamentally reprogram how urban systems function. As these failures become clearer, and broader awareness of systemic injustice in society grows, how can the emerging field of urban tech clarify choices between right and wrong?

Joshua A. Tucker
New York University
The Truth About Fake News: Measuring Vulnerability to Fake News Online
How well can ordinary people identify the veracity of news in real time? The study uses a unique research design that crowdsources popular news articles from both mainstream and suspect news sources, appearing within the past 24 hours, to both ordinary citizens and professional fact checkers. Professor Tucker will report on the individual-level characteristics of those likely to incorrectly identify false news stories as true, the results of interventions that attempt to reduce the prevalence of this behavior, and the prospects for crowdsourcing to serve as a viable means of identifying false news stories in real time.

Lee McGuigan
Cornell Tech | Digital Life Initiative
Design Choice: Mechanism Design’s Digital Drift
Mechanism design is a form of optimization developed in economic theory. It casts economists as institutional engineers, choosing an outcome and then arranging a set of market rules and conditions to achieve it. In this paper, Lee McGuigan, Jake Goldenfein, and Salome Viljoen argue that mechanism design, applied in algorithmic environments, has become a tool for producing information domination, distributing social costs in ways that benefit designers, and controlling and coordinating participants in multi-sided platforms.

Serge Egelman
International Computer Science Institute | University of California, Berkeley
Taking Responsibility for Someone Else's Code: Studying the Privacy Behaviors of Mobile Apps at Scale
Modern software development has embraced the concept of "code reuse," which is the practice of relying on third-party code to avoid "reinventing the wheel" (and rightly so). While this practice saves developers time and effort, it also creates liabilities: the resulting app may behave in ways that the app developer does not anticipate. This can cause very serious issues for privacy compliance: while an app developer did not write all of the code in their app, they are nonetheless responsible for it. In this talk, I will present research that my group has conducted to automatically examine the privacy behaviors of mobile apps vis-à-vis their compliance with privacy regulations.

Emma Pierson
Microsoft Research | Jacobs Institute/Cornell Tech (2021)
Modeling COVID with mobility data to understand inequality and guide reopening
In this paper, we develop a model of COVID spread that uses dynamic mobility networks, derived from US cell phone data, to capture the hourly movements of millions of people from local neighborhoods (census block groups, or CBGs) to points of interest (POIs) such as restaurants, grocery stores, or religious establishments.
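A heavily simplified sketch of this modeling idea, with toy dimensions and made-up rates, couples an SEIR-style update to an hourly CBG-to-POI visit matrix; the paper's actual model is far richer.

```python
import numpy as np

# Toy SEIR dynamics driven by an hourly CBG-to-POI visit matrix W,
# where W[c, p] counts visits from CBG c to POI p that hour. All
# populations, rates, and visit counts are made-up assumptions.
rng = np.random.default_rng(seed=4)
n_cbgs, n_pois, hours = 4, 3, 48
pop = np.array([1000.0, 800.0, 1200.0, 500.0])          # CBG populations
S, E, I = pop - 1.0, np.zeros(n_cbgs), np.ones(n_cbgs)  # 1 seed case each
beta, sigma, gamma = 0.005, 1 / 96, 1 / 168             # hourly rates

for t in range(hours):
    W = rng.poisson(lam=20, size=(n_cbgs, n_pois)).astype(float)
    # Infection pressure at each POI: infectious share of its visitors.
    pressure = (W * (I / pop)[:, None]).sum(axis=0) / W.sum(axis=0)
    # New exposures in each CBG, weighted by where residents visited.
    new_E = beta * S * ((W / pop[:, None]) @ pressure)
    new_I, new_R = sigma * E, gamma * I
    S, E, I = S - new_E, E + new_E - new_I, I + new_I - new_R

print("infectious per CBG after 48 hours:", np.round(I, 2))
```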

Cory Doctorow
Author, Activist, Journalist and Blogger
Oligarchy and Technology
Software has eaten the world and crapped out a dystopia: a place where Abbott Labs uses copyright claims to stop people with diabetes from taking control over their insulin dispensing, and where BMW provides seat heaters as an over-the-air upgrade that you have to pay for by the month. Companies have tried this stuff since the year dot, but Thomas Edison couldn't send a patent enforcer to your house to make sure you honored the license agreement on your cylinder by only playing it on an Edison phonograph. Today, digital systems offer perfect enforcement for the pettiest, greediest grifts imaginable.

Joseph Turow
Annenberg School for Communication | University of Pennsylvania
Seductive Surveillance and Social Change: The Rise of the Voice Intelligence Industry
Drawing from my forthcoming book The Voice Catchers (Yale U Press, early 2021), I pose two key questions about this new development in the United States: How has the voice intelligence industry been able to gain the kind of social traction that has tens of millions of people giving up their voiceprints to so-called “intelligent assistants”? And in the face of this widespread shift to voice bio-profiling, what social policies should concerned citizens advocate to slow the process and implement regulations regarding this new form of surveillance?

Yaël Eisenstat & Carrie Goldberg
Digital Life Initiative | C.A. Goldberg, PLLC
"With Great Power Comes... No Responsibility?"
Who bears responsibility for the real-world consequences of technology? This question has been unduly complicated for decades by the 1996 legislation that provides immunity from liability to platforms that host third-party content: Section 230 of the Communications Decency Act.

MC Forelle
Cornell Tech
When the Software Rubber Hits the Mechanical Road: Regulating the Repair and Modification of the Modern Car
What happens when two different technologies, historically governed by different regulatory regimes, are combined into a single, hybrid, consumer device?

Omid Poursaeed
Cornell Tech
Deepfakes and Adversarial Examples: Fooling Humans and Machines
Although manipulations of visual and auditory media are as old as media themselves, the recent advent of deepfakes has marked a turning point in the creation of fake content. In this talk, Omid Poursaeed will discuss recent methods for adversarial data manipulation and possible defense strategies against them.

Madelyn R. Sanfilippo & Yan Shvartzshnaider
Princeton University | New York University
Privacy/Disaster: When Information Flows Are Taken Out of Context
Privacy is contextual. Every day, we manage different contexts and adjust our privacy expectations accordingly. The theory of Contextual Integrity offers a way to capture contextual norms and a heuristic to analyze privacy. This analysis is especially helpful for detecting situations in which system designers take advantage of well-established contextual privacy expectations to encourage user disclosures without adhering to governing norms. For example, imagine an app that is marketed to you as a patient/doctor communication tool in a medical context, yet is actually an insurance company trying to get more information about you.
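Contextual Integrity's heuristic can be sketched as a check on five-parameter flow tuples. The toy encoding below, including the medical-context norm and the insurer example from the abstract, is our illustration rather than the speakers' tool.

```python
from dataclasses import dataclass

# Toy encoding of a Contextual Integrity flow as a five-parameter
# tuple; a flow violates contextual integrity when it departs from
# the entrenched norm of its governing context.

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    info_type: str
    transmission_principle: str

# Norm of the medical context: health data flows from patient to
# doctor under a principle of confidentiality.
medical_norm = Flow("patient", "doctor", "patient",
                    "health data", "confidentiality")

# The abstract's example: an app posing as a doctor/patient tool
# while actually routing data to an insurer for commercial use.
actual = Flow("patient", "insurance company", "patient",
              "health data", "commercial use")

def violates(flow: Flow, norm: Flow) -> bool:
    """Flag any flow whose parameters deviate from the context norm."""
    return flow != norm

print(violates(actual, medical_norm))  # True: contextual integrity broken
```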

Diana Freed
Cornell Tech
Improving the Privacy and Safety for Survivors of Intimate Partner Violence
Diana will present her research on technology-mediated abuse in IPV, threat models, and recent work from Cornell Tech's Intimate Partner Violence clinic.

Frank Pasquale
University of Maryland
Machines Judging Humans: The Promise and Perils of Formalizing Evaluative Criteria
Over the past decade, algorithmic accountability has become an important concern for social scientists, computer scientists, journalists, and lawyers.

Chris Sagers
Cleveland State University
United States v Apple: Competition in America
United States v. Apple: Competition in America examines the misunderstandings and exaggerations that firms have raised throughout antitrust history to justify collusion and monopoly.

Hongyi Wen
Cornell Tech
Recommender Systems with Users in the Loop
Recommender systems have come to serve as the “homepage” for users to access informational items such as videos, music, books, etc.

Zachary Chase Lipton
Carnegie Mellon University
Fairness & Interpretability in Machine Learning and the Dangers of Solutionism
Supervised learning algorithms are increasingly operationalized in real-world decision-making systems. Unfortunately, the nature and desiderata of real-world tasks rarely fit neatly into the supervised learning contract.

Kate Klonick
St. John's University Law School
Facebook's Oversight Board
For a decade and a half, Facebook has dominated the landscape of digital social networks and has evolved to become one of the most powerful arbiters of online speech.

Eugene Bagdasaryan
Cornell Tech
Evaluating Privacy Preserving Techniques in Machine Learning
Modern applications frequently require access to sensitive data, such as facial images, typing history, or health records, thereby increasing the need for expressive privacy protection.

Tal Zarsky
University of Haifa
When a Small Change Makes a Big Difference
A growing body of scholarship is addressing the risks of opaque analyses as well as the fear of hidden biases and discrimination that may come along with automated decision-making.

Michael Sobolev
Cornell Tech
Behavioral Science in the Digital Economy
Over the last decade, behavioral science has made significant progress and impact in academic research and has influenced policy in commercial organizations and governments. At the same time, the rise of digital technologies and the digital economy provides exciting opportunities and presents challenges for the next decade of behavioral science. In this talk, Sobolev will explore novel avenues for behavioral science research in the digital economy.

Kashmir Hill
The New York Times
Losing Face: The Privacy Challenges as Facial Recognition Goes Mainstream
Hill will discuss the ethics of building facial recognition databases that use the faces of people who have not consented to taking part.

Yiqing Hua
Cornell Tech
Understanding Adversarial Interactions Against Politicians on Social Media

J Nathan Matias
Cornell University
Advancing Flourishing Digital Societies through Citizen Science

Lee McGuigan
Cornell Tech
Dreams and Designs to Optimize Advertising

James Grimmelmann
Cornell Tech
Spyware vs Spyware

Ben Fish
Microsoft Research
Relational Equality: Modeling Unfairness in Hiring via Social Standing

Niva Elkin-Koren
University of Haifa
Contesting Algorithms

Sorelle Friedler
Haverford College
Fairness in Networks: Understanding Disadvantage and Information Access

Salome Viljoen & Ben Green
Cornell Tech | Harvard University
Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought

Kiel Brennan-Marquez, Karen Levy, & Daniel Susser
UConn School of Law | Cornell University | Pennsylvania State University
Strange Loops: Apparent vs Actual Involvement in Automated Decision-Making

Ido Sivan-Sevilla
Cornell Tech
Complementaries and Contradictions: National Security and Privacy Risks in US Federal Policy, 1968-2018

Kathleen R McKeown
Columbia University
Where Natural Language Processing Meets Societal Needs

Alondra Nelson
SSRC, Institute for Advanced Study
"I am Large, I Contain Multitudes"

Jake Goldenfein
Cornell Tech
Private Companies and Scholarly Infrastructure: The Question of Google Scholar

Ifeoma Ajunwa
Cornell University
The Paradox of Automation as Anti-Bias Intervention

Doug Rushkoff
CUNY Queens
Team Human: Optimizing Technology for Human Beings (Instead of the Other Way Around)

Sunny Consolvo
Studies of Privacy-, Security-, and Abuse-Related Beliefs and Practices

Maggie Jack
Cornell Tech
Localization of Transnational Tech Platforms and Liminal Privacy Practices in Cambodia

Isabelle Zaugg
Columbia University
Precarity and Hope for Digitally-Disadvantaged Languages (And Their Scripts)

Jessica Vitak
University of Maryland
Privacy, Security, and Ethical Challenges in the Era of Big Data

Angela Zhou
Cornell Tech
Towards an Ecology of Care for Data-Driven Decision Making

Moran Yemini
Yale University
The New Irony of Free Speech

Lauren van Haaften-Schick
Cornell University
"The Artist's Contract" (1971) to Smart Contracts: Remedies for Inequity in the Art Market in Historical Perspective

David Pozen
Columbia Law School
Loyal to Whom? A Skeptical View of Information Fiduciaries

Brad Smith
Microsoft
Facial Recognition: Coming to a Street Corner Near You

Luke Stark
Microsoft Research
Darwin's Animoji: Histories of Emotion, Animation, and Racism in Everyday Facial Recognition

Elizabeth O'Neill
Cornell Tech
The Ethics of Artificial Ethics Advisors

Finn Brunton
NYU Steinhardt
Digital Cash: The Unknown History of the Utopians, Anarchists, and Technologists Who Built Cryptocurrency

Fabian Okeke
Cornell Tech
Privacy and Equity in Developing Countries

Laura Forlano
Cornell Tech
Techno-Optimistic Smart City Imaginaries: A Patchwork of Four Urban Futures

Timnit Gebru
Google AI
Understanding the Limitations of AI: When Algorithms Fail

Nirvan Tyagi
Cornell Tech
Survey of Security and Privacy Concerns in Machine Learning

Joseph Reagle
Northeastern University
The Digital Complicity of Facebook's Growth Hackers & Chip-Implanting Biohackers

David Robinson
Upturn, Cornell University
Kidney Allocation Policy as Algorithmic Governance

Francesca Rossi
IBM AI Ethics Global Leader
Ethically Bounded AI

Eran Toch
Cornell Tech
Smart Cities/Digital Neighborhoods: Privacy, Equality, and Adoption of Urban Technologies
The idea of the Smart City is becoming central to the adoption of technologies that enhance and regulate urban spaces. At the same time, smart cities bring about new challenges, as diverse populations interact with an array of new technologies, most of which are based on large-scale data collection and have increasing effects on residents' lives. In this talk, we will discuss the emerging impact urban technologies have on cities. For instance, how would urban technologies affect urban inequality?

Glen Weyl
Microsoft Research
Liberal Radicalism for the Digital Age
The present architecture of the digital economy is leading to unprecedented concentrations of economic and political power.

Jake Goldenfein
Cornell Tech
The Profiling Potential of Computer Vision
Over the past decade, researchers have been investigating new technologies for categorizing people based on physical attributes alone.

Elana Zeide
Seton Hall Law School
How Ed Tech Shapes Pedagogy & Policy (& Vice Versa)

David Bernet
Film Screening: Democracy
This documentary unveils the political intrigue in the struggle for new data protection legislation in the EU.

Rishab Nithyanand
Data & Society
Anonymity & Ad Tracking: Insights from Measurement Studies

Kadija Ferryman
Data & Society
Precision Medicine & Fairness

Yafit Lev-Aretz
New York University
Corporate Data & Public Ends

Beth Simone Noveck
GovLab
CrowdLaw: Experiments in Participatory Urban Lawmaking

Natasha Dow Schull
NYU Steinhardt
Wearable Technology as Self Regulation

Solon Barocas
Cornell University
Allocative Versus Representational Harms in Machine Learning

Nicki Dell, Tom Ristenpart, & Karen Levy
Cornell Tech | Cornell University
Digital Safety and Security for Victims of Intimate Partner Violence

Trebor Scholz
The New School
This is What a Democratic Platform Economy Looks Like

Matt Jones & Chris Wiggins
Columbia University | New York Times
Data Literacy & Ethics in the Lab

Amanda Levendowski
NYU School of Law
Artificial Intelligence, Bias & Copyright Complications

Roy Cohen
Cornell Tech
Film Screening: Machine of Human Dreams
To launch the Digital Life Seminar series, we are lucky to have Cornell Tech's Roy Cohen present his feature-length documentary, Machine of Human Dreams, about AI entrepreneur Ben Goertzel and his quest to build the first thinking machine.