
Publications & Talks


Critical Reflections

The academic funding scandals plaguing 2019 have highlighted some of the more problematic dynamics between tech industry money and academia (see e.g. Williams 2019, Orlowski 2017). But the tech industry’s deeper impacts on academia and knowledge production actually stem from the entirely non-scandalous relationships between technology firms and academic institutions. Industry support heavily subsidizes academic work. That support comes in the form of direct funding for departments, centers, scholars, and events, but also through the provision of academic infrastructures like communications platforms, computational resources, and research tools. In light of the reality that infrastructures are themselves political, it is imperative to unpack the political dimensions of scholarly infrastructures provided by big technology firms, and question whether they might problematically impact knowledge production and the academic field more broadly. Read more >

When firms with large amounts of personal data consolidate through mergers or acquisitions, we should be concerned about the potential threats arising from the newly combined data power they wield. Much attention has been paid to cases where the personal data involved concerns individuals who are direct users or customers of the consolidating entities. But less attention has been paid to cases where the data is collected via third-party trackers scattered across websites and apps, where the people involved have no direct relationship with the merging entities. Our paper focuses on this problem, addressing the role that antitrust authorities should play, but have so far failed to, with regard to mergers and acquisitions in the third-party tracking industry. The paper combines an empirical methodology with a critical inquiry into the existing antitrust precedents on these issues in the US and Europe to argue that a bolder approach is needed: one that engages in a pluralist analysis of economic and noneconomic concerns about concentrations of control over data. Read more >

Facebook recently published a detailed plan for the establishment of an “independent Oversight Board”, which will review content moderation cases that arise within the platform. In a press call, Facebook explained that the plan is the result of almost a year’s work by more than 100 people within the company, supported by an extensive consultation process that included roundtables, workshops, and discussions with hundreds of people around the world. The purpose of the board, Facebook explained in a charter, is “to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook’s content policies.” The company further argued that the board will improve its accountability and decision-making, and expressed hope that other companies will follow suit. In an accompanying letter, Mark Zuckerberg wrote: “I don’t believe private companies like ours should be making so many important decisions about speech on our own.” Read more >

By Jake Goldenfein (Cornell Tech) and Mainack Mondal (IIT Kharagpur)

Cyber-attacks are ubiquitous: in 2019 alone we have already seen data breaches at automotive companies, ransomware attacks on city governments, and targeted smartphone exploits through Facebook-owned WhatsApp. This combination of vehicle manufacturer, government, and tech giant is meaningful because these are also the entities involved in researching, building, and operating autonomous transport systems. Those attacks leaked personal data, paralyzed government systems, and, most tragically in the WhatsApp case, handed control of devices used by activists and journalists to states interested in detaining or silencing them. While some attacks are merely inconvenient or costly, where secure communication is essential they can be incredibly dangerous. Nowhere is this clearer than in the world of autonomous vehicles, where a compromised vehicle, or fleet of vehicles, can be life-threatening. Read more >

They claim not to sell data, but Facebook and Google have paid developers with data. This selling of your personal data is central to platform economics. Could quality signals change the terms of the bargain? Google and Facebook’s claims about data selling are as false as they are adamant. To understand why, one needs to shift focus from the platform-advertiser relationship to developer-platform incentives. To grow their platforms, Google and Facebook reward developers (in effect, pay them) with personal information. Yet even experts in the field miss this point, because information practices are opaque and misleading, and because most people experience platforms as consumers rather than as developers. Read more >

According to The Economist, with the emergence of a data-driven economy, "Conventional antitrust thinking is being disrupted from within", and the inability to regulate such an emerging market may affect the dynamics of value creation within digital capitalism. Read more >


Research News

For more information on the latest conference activities, publication updates, and accolades of the DLI community, visit our news section or follow us on Twitter.

Projects

Increasingly, decisions and actions affecting people's lives are determined by automated systems processing personal data. Excitement about these systems has been accompanied by serious concerns about their opacity and the threats that they pose to privacy, fairness, and other values. Recognizing these concerns, the investigators seek to make real-world automated decision-making systems accountable for privacy and fairness by enabling them to detect and explain violations of these values. The technical work is informed by, and applied to, online advertising, healthcare, and criminal justice, in collaboration with and as advised by domain experts.
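To make the idea of detecting a fairness violation concrete, here is a minimal Python sketch of a demographic parity check against a black-box decision system. The parity criterion, the decide interface, and the tolerance threshold are all illustrative assumptions, not the investigators' actual definitions, which, as described below, are context-dependent and normative.

    from typing import Callable, Sequence

    def demographic_parity_gap(
        decide: Callable[[dict], bool],   # the black-box decision system under audit
        records: Sequence[dict],          # audit sample tagged with a protected attribute
        protected_key: str = "group",
    ) -> float:
        """Return the absolute gap in positive-decision rates between groups."""
        outcomes: dict[str, list[bool]] = {}
        for record in records:
            outcomes.setdefault(record[protected_key], []).append(decide(record))
        rates = [sum(group) / len(group) for group in outcomes.values()]
        return max(rates) - min(rates)

    def flag_potential_violation(decide, records, tolerance: float = 0.1) -> bool:
        # The tolerance is an assumed threshold; real definitions vary by context.
        return demographic_parity_gap(decide, records) > tolerance

A flagged gap is only a potential violation; whether it reflects a proxy for the protected attribute or a legitimate factor is precisely the explanatory question the project addresses.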

Addressing privacy and fairness in decision systems requires providing formal definitional frameworks and practical system designs. The investigators provide new notions of privacy and fairness that deal with both protected information itself and proxies for it, while handling context-dependent, normative definitions of violations. A fundamental tension they address pits the access given to auditors of a system against the system owners' intellectual property protections and the confidentiality of the personal data used by the system. The investigators decompose such auditing into stages, where the level of access granted to an auditor is increased when potential (but not explainable) violations of privacy or fairness are detected. Workshops and public releases of code and data amplify the investigators' interactions with policy makers and other stakeholders. Their partnerships with outreach organizations encourage diversity. 
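As a rough illustration of the staged-access idea, the following Python sketch escalates an auditor's access one stage at a time, and only while a potential violation remains detected but unexplained at the current stage. The stage names, the system.view interface, and the detect/explain callables are hypothetical, a sketch under assumed interfaces rather than the investigators' actual design.

    from enum import IntEnum

    class Access(IntEnum):
        BLACK_BOX = 1   # query inputs and outputs only
        GRAY_BOX = 2    # additionally inspect features and candidate proxy variables
        WHITE_BOX = 3   # additionally inspect model internals and training data

    def staged_audit(system, sample, detect, explain) -> str:
        """Widen auditor access stage by stage, only while a potential
        violation remains detected but unexplained."""
        for level in Access:
            view = system.view(level)    # information disclosed at this stage
            if not detect(view, sample):
                return "no potential violation detected"
            explanation = explain(view, sample)
            if explanation is not None:
                return f"violation explained at {level.name}: {explanation}"
            # A potential but unexplainable violation justifies wider access.
        return "potential violation unexplained at full access; refer to policy review"

This design mirrors the tension described above: the auditor's access grows only as far as the evidence warrants, limiting exposure of intellectual property and of the confidential personal data the system uses.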


 
 
 

Contact

Address

Cornell Tech

2 W Loop Rd,

New York, NY 10044

Get Here >

DLI Queries

Jessie G. Taft

jgt43@cornell.edu