Publications & Talks

Critical Reflections

According to The Economist, with the emergence of a data-driven economy, "Conventional antitrust thinking is being disrupted from within," and the inability to regulate this emerging market may affect the dynamics of value creation within digital capitalism. Read more >

They claim not to sell data, but Facebook and Google have paid developers with data, and this trading of personal data is central to platform economics. Could quality signals change the terms of the bargain? Google and Facebook's claims about data selling are as adamant as they are false. To understand why, one needs to shift focus from the platform-advertiser relationship to developer-platform incentives. To grow their platforms, Google and Facebook reward developers, in effect paying them, with personal information. Yet even experts in the field miss this point, both because information practices are opaque and misleading, and because most people experience platforms as consumers rather than as developers. Read more >

By Jake Goldenfein (Cornell Tech) and Mainack Mondal (IIT Kharagpur)

Cyber-attacks are ubiquitous, but in 2019 we have already experienced data breaches from automotive companies, ransomware attacks on city governments, and targeted smartphone exploits through Facebook-owned WhatsApp. This combination of vehicle manufacturer, government, and tech giant is meaningful because these are also the entities involved in researching, building, and operating autonomous transport systems. Those attacks leaked personal data, paralyzed government systems, and, most tragically in the WhatsApp case, handed control of devices used by activists and journalists to states interested in detaining or silencing them. While some attacks are merely inconvenient or costly, cyber-attacks are incredibly dangerous wherever secure communications are essential. Nowhere is this clearer than in the world of autonomous vehicles, where a compromised vehicle, or fleet of vehicles, can be life-threatening. Read more >

The academic funding scandals plaguing 2019 have highlighted some of the more problematic dynamics between tech industry money and academia (see e.g. Williams 2019, Orlowski 2017). But the tech industry’s deeper impacts on academia and knowledge production actually stem from the entirely non-scandalous relationships between technology firms and academic institutions. Industry support heavily subsidizes academic work. That support comes in the form of direct funding for departments, centers, scholars, and events, but also through the provision of academic infrastructures like communications platforms, computational resources, and research tools. In light of the reality that infrastructures are themselves political, it is imperative to unpack the political dimensions of scholarly infrastructures provided by big technology firms, and question whether they might problematically impact knowledge production and the academic field more broadly. Read more >

Facebook recently published a detailed plan for the establishment of an "independent Oversight Board," which will review content moderation cases that arise within the platform. In a press call, Facebook explained that the plan is the result of almost a year's work by more than 100 people within the company, supported by an extensive consultation process that included roundtables, workshops, and discussions with hundreds of people around the world. The purpose of the board, Facebook explained in a charter, is "to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook's content policies." The company further argued that the board will improve its accountability and decision-making, and expressed hope that additional companies will follow suit. A letter from Mark Zuckerberg puts it plainly: "I don't believe private companies like ours should be making so many important decisions about speech on our own." Read more >

When firms with large amounts of personal data consolidate through mergers or acquisitions, we should be concerned about the potential threats arising from the newly combined data power they wield. A lot of attention has been paid to cases where the personal data involved concerns individuals who are direct users or customers of the consolidating entities. But less attention has been paid to cases where the data is collected via third-party trackers scattered across websites and apps, where the people involved have no direct relationship with the merging entities. Our paper focuses on this problem, addressing the role that antitrust authorities should, but have so far failed to, play with regard to mergers and acquisitions in the third-party tracking industry. The paper combines an empirical methodology with a critical inquiry into the existing antitrust precedents on these issues in the US and Europe to argue that a bolder approach is needed: one that engages in a pluralist analysis of economic and noneconomic concerns about concentrations of control over data. Read more >

On January 1 of this year, California's Consumer Privacy Act (the CCPA) went into effect. The CCPA is the first law of its kind in the US. It is widely considered to provide the strongest consumer data protection in the country, and has been the subject of considerable speculation (and lobbying) since it began as a popular referendum in 2017. While the CCPA does offer a suite of consumer protections to California residents, it also has several shortcomings that may significantly undermine some of the law's most ambitious protections for privacy in the digital age. Below, I offer an overview of the law as well as a few thoughts on what it does (and does not) achieve for consumer privacy. Read more >

Who bears responsibility for the real-world consequences of technology? This question has been unduly complicated for decades by the 1996 legislation that provides immunity from liability to platforms that host third-party content. According to Section 230 of the Communications Decency Act, written before platforms such as Facebook, YouTube and Twitter existed: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This one sentence has been interpreted as essentially freeing the “provider”, including social media platforms, from responsibility for the content they host (with a few carve-outs for things such as intellectual property infringements and sex trafficking). Read more >

Research News

For more information on the latest conference activities, publication updates, and accolades of the DLI community, visit our news section or follow us on Twitter.


Increasingly, decisions and actions affecting people's lives are determined by automated systems processing personal data. Excitement about these systems has been accompanied by serious concerns about their opacity and the threats that they pose to privacy, fairness, and other values. Recognizing these concerns, the investigators seek to make real-world automated decision-making systems accountable for privacy and fairness by enabling them to detect and explain violations of these values. The technical work is informed by, and applied to, online advertising, healthcare, and criminal justice, in collaboration with and as advised by domain experts.

Addressing privacy and fairness in decision systems requires providing formal definitional frameworks and practical system designs. The investigators provide new notions of privacy and fairness that deal with both protected information itself and proxies for it, while handling context-dependent, normative definitions of violations. A fundamental tension they address pits the access given to auditors of a system against the system owners' intellectual property protections and the confidentiality of the personal data used by the system. The investigators decompose such auditing into stages, where the level of access granted to an auditor is increased when potential (but not explainable) violations of privacy or fairness are detected. Workshops and public releases of code and data amplify the investigators' interactions with policy makers and other stakeholders. Their partnerships with outreach organizations encourage diversity. 

The Digital Life Initiative developed the Augmented Histories Project to unite partners from the fields of museology, tech, human rights, dance, and education to explore the ways in which emergent technology can redress hidden societal narratives (both past and present). The applied research objectives range from (i) integrating augmented reality systems and web-mapping platforms to digitally transmute urban and rural environments into historical canvases, to (ii) experimentations that activate dance, gesture, and data from the archive. In concert with these technical and creative ambitions is the aim to (iii) design curricula that blur the disciplinary boundaries between computational, curatorial, and choreographic thinking.


The exploratory project was conceived and designed by DLI Research Fellow Michael Byrne, who noted that Eleanor Roosevelt was unrepresented within Four Freedoms Park (and not formally acknowledged anywhere on Roosevelt Island), echoing the systemic omission of women from sites of public commemoration and historical discourse. Read more >

Cornell Tech

2 W Loop Rd, New York, NY 10044

DLI Queries: Jessie G. Taft