DLI | Seminar Series

 

Upcoming Seminars

Thursday 21 NOVEMBER | 2019

Guest Speaker

J. Nathan Matias | Cornell University

Title

Advancing Flourishing Digital Societies through Citizen Science

Abstract

As the public asks questions about the power of technology in society and wonders how to create change, many people mistrust the technology industry to develop answers. How can we advance a world where digital power is guided by evidence and accountable to the public?

 

Citizen science currently plays a central role in the governance of complex systems involving powerful corporate actors, including the environment, product safety, and consumer protection. In this conversation, J. Nathan Matias will describe how citizen scientists collect data and test causal questions for change in our digital lives, on topics including harassment, pro-social behavior, misinformation, and algorithmic accountability. Designing citizen behavioral science requires advances in computer science and can lead to fundamental discoveries about human and algorithm behavior. Nathan will also present the work of the Citizens and Technology Lab at Cornell University (formerly CivilServant), which works alongside communities of tens of millions of people in citizen behavioral science for a flourishing internet. More info >

Thursday 5 DECEMBER | 2019

Speaker

Yiqing Hua | Cornell Tech 

Title

Understanding Adversarial Interactions Against Politicians on Social Media

Abstract

Adversarial interactions against politicians on social media such as Twitter have a significant impact on society, and in particular both discourage people from seeking office and disrupt substantive political discussions online. There are many questions to be answered regarding this issue. Do politicians of a certain gender or party affiliation receive more adversarial interactions? What are the unique challenges in automatically detecting adversarial interactions against political figures? Do users who engage in adversarial interactions exhibit different characteristics than those who don’t? In this talk, I will present our findings from a study conducted on a dataset of 400 thousand users' 1.2 million replies to the 756 candidates for the U.S. House of Representatives in the two months leading up to the 2018 midterm election on Twitter.

 

Previous Seminars

Thursday 14 NOVEMBER | 2019

Speaker

Lee McGuigan | Digital Life Initiative, Cornell Tech 

Title

Dreams and Designs to Optimize Advertising

Abstract

This talk is about the history of a future imagined by advertisers as they constructed the affordances of digital technologies. It examines how related efforts to predict and influence consumer habits and to package and sell audience attention helped orchestrate the marriage of behavioral science and data extraction that defines media and marketing environments today. Across the second half of the twentieth century, the advertising industry reconstructed its information infrastructures around an ambition to account for, calculate, and influence more of public and personal life. This transformation sits at an intersection where the processing of data, the processing of commerce, and the processing of culture collide. The talk will introduce a few key efforts to reorient advertising around a science of optimization, focusing on early designs for hypertargeting—or producing an audience of one. 

 

Seminar Poster >

Seminar Reflections >

Thursday 7 NOVEMBER | 2019

Speaker

James Grimmelmann | Cornell Tech | Cornell Law School

Title

Spyware vs. Spyware

Abstract

Antivirus software blocks malware; malware disables antivirus software. Cookie blockers delete cookies; websites reinstall them.  Applications install new system components; operating system updates delete them. In numerous cases, one program deliberately interferes with another.  These program-vs-program cases challenge the legal system to find a coherent way of distinguishing dangerous programs that harm users and make computers unusable from the valuable programs users rely on to keep them safe from the dangerous ones. Any such distinction, however, must rest on an underlying theory of user autonomy. Looking closely at program-vs-program conflicts can help us understand where current legal theories of what users want go wrong.

 

Seminar Poster >

Seminar Reflections >

Thursday 31 OCTOBER | 2019

Guest Speaker

Ben Fish | Microsoft Research, Montreal

Title

Relational Equality: Modeling Unfairness in Hiring via Social Standing

Abstract

Much recent work in machine learning has centered on formulating computational definitions of human values such as fairness. These translational efforts have largely focused on conceptualizing values easy to formalize, even if these resulting conceptualizations are narrow ones. In the literature on algorithmic fairness, for example, extant work has focused largely on distributional notions of equality, where equality is defined by the resources or decisions given to individuals in the system. On the other hand, it is not necessarily clear how to create computational definitions formalizing other notions of equality or fairness.  One popular alternative has been proposed by the political philosopher Elizabeth Anderson, who focuses on the notion that equal social relations are central to human equality and fairness. In this work, we propose this relational equality as a viable alternative to extant definitions of algorithmic fairness in a hiring market. Key to doing so is being able to model social standing amongst individuals in computational models, so we focus on creating a computational definition of social standing. 

 

Seminar Poster >

Seminar Reflections >

Thursday 24 OCTOBER | 2019

Guest Speaker

Niva Elkin-Koren | University of Haifa | Berkman Klein Center, Harvard University

Title

Contesting Algorithms

Abstract

Social media platforms are responsible for mediating public discourse online. Increasingly, platforms are using Artificial Intelligence and machine learning (AI) to perform content moderation (e.g., matching users and content, adjudicating conflicting claims, or detecting unwarranted content). In a digital ecosystem governed by AI, we currently lack sufficient safeguards against the blocking of legitimate content. Moreover, we lack a space for negotiating meaning and for deliberating the legitimacy of particular speech. In this presentation, Elkin-Koren proposes to address AI-based content moderation by introducing an adversarial procedure. Algorithmic content moderation often seeks to optimize a single goal, such as removing copyright-infringing materials as defined by rights holders, or blocking hate speech. Meanwhile, other public-interest values, such as fair use or free speech, are often neglected. Contesting Algorithms introduces an adversarial design that reflects conflicting interests and thereby offers a check on dominant removal systems. The presentation will introduce the strategy of Contesting Algorithms, discuss its promises and limitations, and demonstrate how regulatory measures could promote its development and implementation in online content moderation.

 

Seminar Poster >

Seminar Reflections >

Thursday 17 OCTOBER | 2019

Guest Speaker

Sorelle Friedler | Haverford College

Title

Fairness in Networks: Understanding Disadvantage and Information Access

Abstract

What does fairness mean within a social network? Access to information spread through a network can mean knowledge of jobs, public health information, or even public safety alerts. Sorelle Friedler will consider who has access to information flowing through a network, how to define fairness in this context, and what interventions can be made to ensure more equal access to information. 

 

Seminar Poster >

Seminar Reflections >

Thursday 10 OCTOBER | 2019

Speakers

Salome Viljoen | Digital Life Initiative, Cornell Tech 

Ben Green​ | Harvard University | AI Now 

Title

Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought

Abstract

The field of computer science is in a bind: on the one hand, computer scientists are increasingly eager to address social challenges; on the other, the field faces a growing awareness that many well-intentioned applications of algorithms in social contexts have led to significant harm. We argue that productively moving through this bind requires developing new practical reasoning methods for those engaged in algorithmic work. To understand what such an intervention looks like and what it may achieve, we look to the twentieth-century evolution in American legal thought from legal formalism to legal realism. Drawing on the lessons of legal realism, we propose a new mode of algorithmic thinking, “algorithmic realism,” that is attentive to the internal limits of algorithms as well as the social concerns that fall beyond the bounds of current algorithmic thinking. Algorithmic realism is a practical orientation to work, and thus will not on its own prevent every harmful impact of algorithms. Nevertheless, it will better equip engineers to reason about the sociality of their work, and provide a necessary first step toward reducing algorithmic harms.

Seminar Poster >

Seminar Reflections >

Thursday 3 OCTOBER | 2019

Guest Speakers

Kiel Brennan-Marquez | UConn School of Law

Karen Levy | Cornell University

Daniel Susser | Pennsylvania State University

Title

Strange Loops: Apparent vs. Actual Involvement in Automated Decision-Making

Abstract 

The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independent from whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority?

 

Seminar Poster >

Read the Paper > 

Thursday 26 SEPTEMBER | 2019

Speaker

Ido Sivan-Sevilla | Digital Life Initiative, Cornell Tech 

Title

Complementaries and Contradictions: National Security and Privacy Risks in U.S. Federal Policy, 1968–2018

Abstract

How does the U.S. balance privacy with national security? This study analyzes how the three regulatory regimes of information collection for (1) criminal investigations, (2) foreign intelligence gathering, and (3) cybersecurity have balanced privacy with national security over a 50-year period. A longitudinal, arena-based analysis is conducted of policies (N=63) introduced between 1968 and 2018 to determine how policy processes harm, compromise, or complement privacy and national security. The study considers the roles of context, process, actor variance, and commercial interests in these policy constructions. Analysis over time reveals that policy actors’ instrumental use of technological contexts and invocations of security crises and privacy scandals have influenced policy changes. Analysis across policy arenas shows that actor variance and levels of transparency in the process shape policy outcomes and highlights the conflicting roles of commercial interests in favor of and in opposition to privacy safeguards. While the existing literature does address these relationships, it mostly focuses on one of the three regulatory regimes over a limited period. Considering these regimes together, the article uses a comparative process-tracing analysis to show how and explain why policy processes dynamically construct different kinds of security-privacy relationships across time and space.

 

Seminar Poster >

Seminar Reflections >

Thursday 19 SEPTEMBER | 2019

Guest Speaker

Kathleen R. McKeown | Columbia University

Title

Where Natural Language Processing Meets Societal Needs

Abstract

The large amount of language available online today makes it possible to think about how to learn from this language to help address needs faced by society. In this talk, McKeown will describe her group’s research on summarization and social media analysis that addresses several different challenges. They have developed approaches that can be used to help people live and work in today’s global world, approaches to help determine where problems lie following a disaster, and approaches to identify when the social media posts of gang-involved youth in Chicago express either aggression or loss.

 

Seminar Poster >

Seminar Reflections >

Thursday 12 SEPTEMBER | 2019

Guest Speaker

Alondra Nelson | SSRC and Institute for Advanced Study

Title

“I am Large, I Contain Multitudes”

Abstract

Direct-to-consumer genetic testing was supposed to augur the ultimate threshold of the quantified self: one’s personal (and personal family) information, for your eyes only, derived through a private consumer transaction. However, it has always been the case that genetic data can be exploited in many domains, regardless of its original provenance and intended use. This has been borne out more recently in the use of direct-to-consumer genetic ancestry testing in the domains of the criminal justice and healthcare systems. In exploring the transitive quality of DNA, this talk will suggest some of the social, political and regulatory issues raised by the circulation of genetic data.

 

Seminar Poster >

Seminar Reflections >

Thursday 5 SEPTEMBER | 2019

Speaker

Jake Goldenfein | Digital Life Initiative, Cornell Tech 

Title

Private Companies and Scholarly Infrastructure – The Question of Google Scholar

Abstract

Google Scholar has become an important piece of academic infrastructure. Not only is it used in searching for academic publications, but its bibliometric system has become critical in the evaluation of scholars for hiring and funding. Its success stems from the fact that Google Scholar appears to organize scholarly information better than scholars have managed to do themselves, and its usability is dramatically better than other academic search and bibliometric services. There have been numerous studies of Google Scholar that explore how the technology actually works (i.e. how it determines academic ‘relevance’ or ‘scholarliness’); what types of work, repositories, and authors it privileges or marginalizes; how comprehensive its citations are; and the different ways bibliometrics systems have been taken up by researchers and for the evaluation of publications, scholars, journals, and universities. However, we take a different approach and analyze the political and ethical dimensions of this infrastructural shift into a corporate platform operating on the basis of different commercial logics, with limited transparency or accountability.

 

Seminar Poster >

Seminar Reflections >

Thursday 02 MAY | 2019

Guest Speaker

Ifeoma Ajunwa | School of Industrial and Labor Relations | Cornell University

Title

The Paradox of Automation as Anti-Bias Intervention

Abstract

A received wisdom is that automated decision-making serves as an anti-bias intervention. The conceit is that removing humans from the decision-making process will also eliminate human bias. The paradox, however, is that in some instances, automated decision-making has served to replicate and amplify bias. In this presentation, Dr. Ajunwa will use the case study of algorithmic capture of hiring as a heuristic device to provide a taxonomy of problematic features associated with algorithmic decision-making as an anti-bias intervention, arguing that those features are at odds with the fundamental principle of equal opportunity employment. To examine these features and explore potential legal approaches for rectifying them, Dr. Ajunwa brings together two streams of legal scholarship: law and technology studies and employment & labor law. 

Seminar Poster >

Seminar Reflections >

Thursday 25 APRIL | 2019

Guest Speaker

Doug Rushkoff | Author, and Professor of Media Theory and Digital Economics at CUNY/Queens

Title

Team Human: Optimizing Technology for Human Beings (instead of the other way around).

Abstract

There is an anti-human agenda embedded in our markets and technologies, which has turned them from means of human connection into ones of isolation and repression. Our corporations and the culture they create glorify individualism at the expense of cooperation, threatening the sustainability not just of our economy but our species. In this talk, Rushkoff will reveal this agenda at work and invite us to remake society toward human ends rather than the end of humans. He will discuss how the intentional repression of humanity impacts diverse sectors of society, explaining how money went from being a means of transaction to a means of extraction and how education transformed from the ideal of learning into an extension of occupational training. Digital age technologies have only amplified these trends, making our systems more brittle and presenting the greatest challenges yet to our collective autonomy: robots taking our jobs, algorithms directing our attention, and social media influencing our votes. Rushkoff will argue that there’s still time to think before we hit the switch and automate ourselves out of existence. We must reconnect with our essentially social nature, assert a place for humans in the emerging landscape, and forge solidarity with the others who understand that being human is a team sport.

Seminar Poster >

Seminar Reflections >

Thursday 18 APRIL | 2019

Guest Speaker

Sunny Consolvo | User Experience Researcher, Google

 

Title

Studies of Privacy-, Security-, and Abuse-Related Beliefs and Practices 

Abstract

This talk will present results of several exploratory studies of the privacy-, security-, and abuse-related beliefs and practices for people in under-represented populations. We will explore some of the digital privacy and security challenges of people living in transitional homeless shelters, performative practices employed by women in South Asia to maintain individuality and privacy despite the frequent borrowing and monitoring of their phones by their families and social relations, online abuse experiences and coping practices of women in South Asia, and digital privacy and security motivations, practices, and challenges of survivors of intimate partner abuse. 

 

Seminar Poster >

Seminar Reflections >

Thursday 11 APRIL | 2019

Speaker

Maggie Jack | Cornell University

Collaborators

Nicola Dell | Cornell Tech

Pang Sovannaroth | IIC Technology University, Phnom Penh

Title

Localization of Transnational Tech Platforms and Liminal Privacy Practices in Cambodia

Abstract

Privacy scholarship has shown how norms of appropriate information flow and information regulatory processes vary according to environment and change as that environment changes, including through the introduction of new technologies. This talk presents the findings from a qualitative research study that examines practices and perceptions of privacy in Cambodia as the population rapidly moves into an online environment (specifically Facebook, the most popular digital platform in Cambodia). We empirically demonstrate how the concept of privacy differs across cultures and show how the Facebook platform, as it becomes popular worldwide, catalyzes change in this fluid concept of privacy through the functions that it builds into the design of its tool. We discuss how the localization of transnational technology platforms provides a key site in which to investigate changing cultural ideas about privacy. We use this case to explore tensions between the ways that digital tools change culture, while also being localized themselves through their integration into specific milieus. Finally, we explore ways that insufficient localization effort by transnational technology companies puts some of the most marginalized users at disproportionate privacy risk when using new technology tools, and offer some pragmatic suggestions for how such companies could improve privacy settings for their global user base.

 

Seminar Poster >

Seminar Reflections >

Thursday 28 MARCH | 2019

Guest Speaker

Isabelle Zaugg | Institute for Comparative Literature and Society, Columbia University

Title

Precarity and Hope for Digitally-Disadvantaged Languages (and Their Scripts)

Abstract

Minority and indigenous languages and scripts currently face unprecedented rates of extinction, and digital technologies appear to be contributing to their decline.  Scholars predict 50-90% of languages will become extinct this century, while only 5% of languages will attain digital vitality.  The lack of basic digital supports like Unicode encoding, fonts, or keyboards make using digitally-disadvantaged languages on digital devices inconvenient or impossible.  Language communities see declines in the use of their language and/or script as people turn to better-supported languages like English, or transliterate their own language into the Latin alphabet, when using devices.  Identity and ways of knowing are embedded within languages and scripts, and so their loss impacts both the most vulnerable communities globally as well as humankind as a whole.  While digital support for language diversity is increasing, will it be too little too late?  This presentation investigates what can be done to close this digital divide through an instrumental case study of Unicode inclusion and the development of supports for the Ethiopic script and its languages, including Ethiopia's national language, Amharic. This presentation concludes with recommendations to strengthen support for digitally-disadvantaged languages and scripts, from inclusion in Unicode, to grassroots coding within and on behalf of digitally-disadvantaged language communities, to advancing the idea that supporting linguistic diversity is Silicon Valley’s corporate social responsibility. 

 

Seminar Poster >

Seminar Reflections >

Thursday 21 MARCH | 2019

Guest Speaker

Jessica Vitak | iSchool | University of Maryland

Title

Privacy, Security, and Ethical Challenges in the Era of Big Data

 

Abstract

Over the past decade, the Internet of Things has pushed its way into our workplaces and homes by making regular products "smarter." We now wear watches to track our steps, heart rate, and sleep patterns. Our thermostats learn over time about our heating and cooling preferences. Our refrigerator can detect when we run out of milk. And our intelligent personal assistants passively listen for a voice cue ("Alexa!") to respond to our questions and commands. In many ways, we are living in the science fiction future we dreamed of decades ago. On the other hand, the influx of devices meant to collect constant data about our movement and location, health, and purchasing patterns raises significant questions about the privacy and security of that data. In this talk, Jessica will share early results from two NSF grants, one looking at privacy and surveillance on smartphones and intelligent personal assistants like Siri and Alexa, and the other a collaborative project on pervasive data ethics. She'll also raise questions for researchers working in this space to consider as they work with large, public datasets to ensure they are taking adequate steps to protect the data and the users behind that data.

 

Seminar Poster >

Seminar Reflections >

Thursday 14 MARCH | 2019

​Speaker

Angela Zhou | DLI Doctoral Fellow, Cornell Tech

Title

Towards an Ecology of Care for Data-Driven Decision-Making

Abstract

The empirical success of machine learning and data science for making sense of otherwise un-operationalizable corpora of text, image, and rich individual- and transaction-level data would seem to suggest opportunities for improving operational decisions at large. However, institutions and individuals attempting to leverage their own data to improve decision-making typically operate in vastly different settings that challenge usual convenience assumptions. At the same time, decision-making settings that arise in business, healthcare, and policy, often revolve around data gathered from people, for which model-building that appeals to simple mechanisms or intuitive stories is inadequate.

 

In this talk, Zhou speculates on what it means to work towards “prescriptive validity” of learning from data to directly inform decisions: what principles contribute to robust and meaningful benefits in outcomes, when data is necessarily historical and limited? She will draw on recent work on learning from observational data and revisiting fairness assessment for decision support to illustrate the necessity of accounting for the lived environments from which data is collected, and upon which decisions will be deployed. Expanding the scope of attention in this way seeks to recognize the broader “ecology” of data-driven decision-making, while emphasizing the role of “care” in assuring prescriptive validity and robustness of any accordant insights.

 

Seminar Poster >

Seminar Reflections >

Thursday 07 MARCH | 2019

Speaker

Moran Yemini | Yale ISP | DLI Visiting Researcher 

Title

The New Irony of Free Speech

 

Abstract

In The Irony of Free Speech, published in 1996, Professor Owen Fiss argued that the traditional understanding of freedom of speech, as a shield from interference by the state, ended up fostering a system that benefited a small number of media corporations and other private actors, while silencing the many, who did not possess any comparable expressive capacity. Conventional wisdom is that by dramatically lowering the access barriers to speech, the Internet has provided a solution to the twentieth-century problem of expressive inequality identified by Fiss and others. As this Article will demonstrate, however, the digital age presents a new irony of free speech, whereby the very system of free expression that provides more expressive capacity to individuals than ever before also systematically diminishes their liberty to speak. The popular view of the Internet as the ultimate promoter of freedom of expression is, therefore, too simplistic. In reality, the Internet, in its current state, strengthens one aspect of freedom (the capacity aspect) while weakening another (the liberty aspect), trading liberty for capacity. This Article will explore the process through which expressive capacity has become a defining element of freedom in the digital ecosystem, at the expense of liberty. The process of diminishing liberty in the digital ecosystem occurs along six related dimensions: interference from multiple sources, state-encouraged private interference, multiple modes of interference, new-media concentration, lack of anonymity, and lack of inviolability. The result of these liberty-diminishing dimensions of our current system of free expression, taken together, is that while we may be able to speak more than ever before, it is doubtful that we are able to speak freely. 

 

Seminar Poster >

Seminar Reflections >

Thursday 28 FEBRUARY | 2019

Speaker

Lauren van Haaften-Schick | DLI Doctoral Fellow, Cornell Tech

Title

‘The Artist’s Contract’ (1971) to Smart Contracts: Remedies for Inequity in the Art Market in Historical Perspective

 

Abstract

In 1971 Conceptual art curator-publisher Seth Siegelaub and lawyer Robert Projansky created The Artist’s Reserved Rights Transfer and Sale Agreement, a legal tool enabling artists to retain property and economic rights in their sold works. To Siegelaub, the “Artist’s Contract” would function as a clear statement of an artist’s desired rights, and its efficacy was ensured by the legal technology of contract and its promise of seamless and just transactions – “A perfect waffle every time!” However, the Artist’s Contract has been little-used and remains controversial, particularly for its stipulation for an artist’s resale royalty, and its demand for transparency in resales. Today however, numerous blockchain-based platforms for selling art employ smart contracts with terms similar to the Artist’s Contract. Here, resale royalties are not taboo, but are encouraged and enforced through the “promise” of smart contracts to automate a “perfect” transaction every time. But rather than rush to celebrate these technological potentials, might we pause to reconsider the value of contracts as systems of relations, rather than processes of automation, as law and society scholars have urged? These developments encourage a comparative view to historical contract technologies as we assess the unfolding implications of smart contracts and blockchain within the art market. 

 

Seminar Poster >

Seminar Reflections >

Thursday 14 FEBRUARY | 2019

Guest Speaker

David Pozen | Columbia Law School

Title

Loyal to Whom? A Skeptical View of Information Fiduciaries

Abstract

The concept of “information fiduciaries” has surged to the forefront of debates on social media regulation. Developed by Professor Jack Balkin, the concept is meant to rebalance the relationship between ordinary individuals and the digital companies that accumulate, analyze, and sell their personal data for profit. Just as the law imposes special duties of care, confidentiality, and loyalty on doctors, lawyers, and accountants vis-à-vis their patients and clients, Balkin argues, so too should it impose special duties on companies such as Facebook, Google, Microsoft, and Twitter vis-à-vis their end users. Over the past several years, this argument has garnered remarkably broad support and essentially zero critical pushback.

 

This paper, co-authored with Lina Khan, seeks to disrupt the emerging consensus by identifying a number of lurking complications in the theory of information fiduciaries as well as a number of reasons to doubt the theory’s capacity to resolve them satisfactorily. Although we agree with Balkin that certain online platforms should be regulated more vigorously, we question whether the concept of information fiduciaries is an adequate or apt response to the problems of information asymmetry and abuse that he stresses, much less to more fundamental problems associated with outsized market power and practices of pervasive surveillance. We also call attention to the potential costs of adopting an information fiduciaries framework—a framework that, we fear, invites an enervating complacency toward social media companies’ core business models and a premature abandonment of more robust visions of public regulation.

Seminar Poster >

Seminar Reflections >

Wednesday 06 FEBRUARY | 2019*

Guest Speaker

Brad Smith | President and Chief Legal Officer, Microsoft

Time & Venue

Gather for lunch | 12.30pm

Presentation | 12.45 - 1.45pm 

Tata Innovation Center | Room 141 

Presentation Title

Facial Recognition: Coming to a Street Corner Near You

Abstract

Facial recognition technology raises issues that go to the heart of fundamental human rights protections, like privacy and freedom of expression. As this technology advances and is rapidly adopted around the globe, we need to ensure that it is used in a way that reflects and respects the values of the world we want to live in. In this talk, Microsoft President Brad Smith will discuss the challenges of facial recognition, especially around bias, privacy, and democratic freedoms. He will cover key questions such as: What type of regulation is needed? How can facial recognition advance public safety while protecting civil liberties? What roles should the technology industry and the public and private sectors play in addressing these challenges?

 

Seminar Poster >

Seminar Reflections >

Thursday 31 JANUARY | 2019

Guest Speaker

Luke Stark | Microsoft Research

Title

Darwin’s Animoji: Histories of Emotion, Animation, and Racism in Everyday Facial Recognition

Abstract

Facial recognition systems are increasingly common components of smartphones and other consumer digital devices. These technologies enable animated video-sharing applications such as Apple’s animoji and memoji, Facebook Messenger’s masks and filters, and Samsung’s AR Emoji. These animations serve as technical phenomena that translate moments of affective and emotional expression into mediated, trackable, and socially legible forms. Through technical and historical analysis of these digital artifacts, the talk will explore the ways facial recognition systems classify and categorize racial identities in human faces in relation to emotional expression. Drawing on the longer history of discredited pseudosciences such as phrenology, the paper considers the dangers both of racializing logics as part of these systems of classification, and of how data on emotional expression gathered through these systems can be used to reinforce systems of oppression and discrimination.

 

Seminar Poster >

Seminar Reflections >

Thursday 29 NOVEMBER | 2018

DLI Speaker

Elizabeth O'Neill | DLI Research Fellow

Title

The Ethics of Artificial Ethics Advisors 

Abstract

A number of philosophers and computer scientists have begun to seriously entertain the idea that AI systems capable of some form of ethical reasoning could be on the horizon. Such AI could come in the near term in a form resembling current decision support systems or, in the more distant future, in the form of artificial general intelligence. Either form of AI could conceivably function as an ethics advisor, supplying recommendations and guidance to humans on day-to-day moral questions. Their advice might be grounded in an individual’s core values and normative beliefs, or in other sources, such as information about shared societal norms and values. This talk examines some of the new types of ethical risks and questions that the use of artificial ethics advisors might introduce.

 

Seminar Poster >

Seminar Reflections >

Thursday 15 NOVEMBER | 2018

Guest

Finn Brunton | NYU Steinhardt

Title

Digital Cash: The Unknown History of the Utopians, Anarchists, and Technologists Who Built Cryptocurrency

Abstract

Bitcoin may seem to have suddenly appeared out of nowhere in 2009. In fact, it is only the best-known recent experiment in a long line of similar efforts going back to the 1970s, as technological utopians and political radicals tried to create new currencies to bring about their visions of the future, whether saving privacy, destroying governments, preparing for apocalypse, or attaining immortality. This talk will take us from autonomous zones in international waters, to the finances of freezing humans for future revival, to the challenge of securing analog dollars and coins, and explore questions like: How do we learn to trust and transact with different kinds of money? What makes digital objects valuable? What would it take to make a digital equivalent to cash, something that could be exchanged but not copied, created but not forged, and which reveals nothing about its users?

 

Seminar Poster >

Seminar Reflections >

Thursday 8 NOVEMBER | 2018

DLI Speaker

Fabian Okeke | DLI Doctoral Fellow

Title

Privacy and Equity in Developing Countries

Abstract

Many communities in developing countries lack health facilities, doctors and nurses to care for them. It is common to find one hospital serving several villages in a 50-mile radius. To combat this problem, communities informally train selected members as health workers who serve as extensions of health facilities to, for example, diagnose and treat pregnant women and sick children. Although previous research has studied the relevance of providing feedback to these health workers, there has been little work on collecting feedback from care recipients, who are arguably the most important stakeholders in the healthcare ecosystem. Fabian's ongoing research attempts to bridge this gap.

 

In his talk, he will discuss work in progress on the real-world challenges that emerge in the design of a beneficiary feedback system for systematically collecting feedback at scale across communities. He will share insights from his preliminary fieldwork in Kenya and examine the ongoing tension between privacy and equity when designing feedback systems for low-resource contexts.

 

Seminar Poster >

Seminar Reflections >

Thursday 1 NOVEMBER | 2018

DLI Speaker

Laura Forlano | DLI Visiting Faculty

Title

Techno-Optimistic Smart City Imaginaries: A Patchwork of Four Urban Futures

Abstract

This talk will describe four urban futures that are being imagined through and juxtaposed with techno-optimistic visions of the smart city. The City as Platform—defined by the integration of digital technologies such as WiFi terminals—is a streamlined version of city government in which many services are provided by third-party companies rather than by the government itself. The City as a (New) Urban Manufacturer—defined by the development of new businesses around design, digital fabrication, and local manufacturing—is a rebirth of manufacturing for the purpose of economic growth. The City as Testbed—defined by the use of public roadways and simulations for the testing of autonomous vehicles—is an experimental space for future technologies. And, lastly, the City as a Lab—defined by the measurement, control, tracking, and surveillance of a wide range of urban processes—is a highly scientific and systematic management of urban life.

 

Seminar Poster >

Seminar Reflections >

Thursday 25 OCTOBER | 2018

Guest

Timnit Gebru

Title

Understanding the Limitations of AI: When Algorithms Fail

Abstract

Automated decision-making tools are currently used in high-stakes scenarios. From natural language processing tools used to automatically determine one’s suitability for a job, to health diagnostic systems trained to predict a patient’s outcome, machine learning models are used to make decisions that can have serious consequences for people’s lives. In spite of the consequential nature of these use cases, vendors of such models are not required to perform specific tests showing the suitability of their models for a given task. Nor are they required to provide documentation describing the characteristics of their models, or to disclose the results of algorithmic audits ensuring that certain groups are not treated unfairly. Timnit Gebru will present examples that illustrate the dire consequences of basing decisions entirely on machine-learning-based systems, and discuss recent work on auditing and exposing the gender and skin-tone bias found in commercial gender classification systems.

 

Seminar Poster >

Seminar Reflections >

Thursday 18 OCTOBER | 2018

DLI Speaker

Nirvan Tyagi | DLI Doctoral Fellow 

Title

Survey of Security & Privacy Concerns in Machine Learning

Abstract

In this talk, Nirvan Tyagi will give a broad survey of the security and privacy concerns raised by the use of machine learning that form the focus of recent research in the area. Topics covered include "fooling" machine learning models, "stealing" machine learning models, and understanding what information machine learning models "leak" about the (potentially sensitive) data they are trained on.

 

Seminar Poster >

Seminar Reflections >

Thursday 11 OCTOBER | 2018

Guest

Joseph Reagle | Northeastern University 

Title

The Digital Complicity of Facebook's Growth Hackers

& Chip-implanting Biohackers

Abstract 

Facebook executives did not intend to create a medium for Russian propaganda; they were preoccupied with the growth of their platform. Bio-hackers who “chip” themselves give little thought to dystopic scenarios of monitoring and control; they simply seek convenience. Are these claims, then, exonerating? No. Even if growth- and bio-hackers are not principals to harm, they can be complicit, and I apply Lepora and Goodin’s (2013) framework for complicity to these high-tech cases. However, assessing digital complicity is difficult because consequences can be contingent, even if dire, and responsibility can seem tenuous and distant. Specifically, creators’ intent and users’ embrace of problematic technology require additional consideration. To address intent, I adapt Robert Merton’s (1936) classic essay “The Unanticipated Consequences of Purposive Social Action.” On the embrace of problematic technology, when technologies of the self become technologies of power (Foucault, 1977), I make use of Margaret Olivia Little’s (1998) notion of cultural complicity. Where digital complicity is likely, I conclude by discussing how people have opposed, limited, or (at least) disclaimed the harmful uses of technology they create or embrace.

 

Seminar Poster >

Seminar Reflections >