Jessie G Taft

DL Seminar | The Profiling Potential of Computer Vision

Updated: Jan 8, 2019

Julia Narakornpichit | Master's Student | Cornell Tech


Illustration by Gary Zamchick

DL Seminar Speaker Jake Goldenfein

It is quite common to hear about the many political and ethical issues that arise when computer vision is used to classify people. In his presentation "The Profiling Potential of Computer Vision," however, DLI Research Fellow Jake Goldenfein takes a different approach and focuses on the computational systems themselves: how they understand the world and the methods they use to measure and classify humans. Goldenfein is particularly concerned with the methodology of finding what computation deems to be the "truth," and with how that methodology can affect our reality. He explains how computer vision and other forms of data science have the potential to shift our understanding of the truth. He specified that profiling in this context refers to the legal notion of using personal data to evaluate a person's particular traits, and he posed the questions: What does this mean for our reality? What could be done legally to intervene?


Computer vision takes in data that is a model or image of the real world and classifies it. When profiling with computer vision, data is taken from the real, physical world and then used to make decisions that affect that same world. This could have serious negative consequences. If such systems were used to control access to public and private spaces, they could shape how people move through and interact with the physical world. For example, a shopping center might use facial recognition to deny someone entry because they are not a good enough shopper, or a public transport operator might revoke someone's access because of their history of political expression.
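To make that pipeline concrete, here is a minimal, purely illustrative Python sketch of the kind of face-based access gate described above. The classifier, the "shopper score," and the threshold are all hypothetical stand-ins invented for this example (no real system or library API is implied); the point is only that a statistical inference drawn from a face flows directly into a decision about physical access.

```python
# Hypothetical sketch of a face-based access gate (illustration only).
# The model, the "shopper_score" attribute, and the threshold are invented
# for this example; no real system or API is implied.

from dataclasses import dataclass


@dataclass
class Profile:
    identity: str
    shopper_score: float  # score a vision model is assumed to infer from a face


def classify_face(image_bytes: bytes) -> Profile:
    """Stand-in for a computer vision model that maps a face image to a profile.

    A real deployment would run a trained model here; this dummy returns a
    fixed profile so the sketch stays self-contained and runnable.
    """
    return Profile(identity="unknown", shopper_score=0.42)


def grant_entry(image_bytes: bytes, threshold: float = 0.5) -> bool:
    """The profiling step at issue: an inference about a person gates access."""
    profile = classify_face(image_bytes)
    return profile.shopper_score >= threshold


if __name__ == "__main__":
    fake_image = b"\x00" * 16  # placeholder for a camera frame
    print("Entry granted:", grant_entry(fake_image))
```

The decision rule is deliberately trivial: once an inferred score exists, gating a door on it is a one-line comparison, which is part of what makes this kind of profiling so easy to deploy and so consequential.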


According to Wang and Kosinski, "there is more information available in a human face than a human can perceive or interpret." The claim is that computers can understand more about a human face than humans can, through purely computational analysis in which aspects of the world are translated into mathematical models that humans themselves can no longer navigate. As a society, we have tended to assume that when technology translates aspects of the physical world into digital entities, bits and numbers, the objects of the physical world are distorted and reduced from what they actually are: the real world is not accurately represented, but diminished. Wang and Kosinski claim the opposite, that only computing systems have the ability to access truth and reality. Goldenfein noted that treating the results of such statistical analyses as the truth, while disregarding qualitative analysis, is not a technical process leading to the "truth" but a philosophy. He warned attendees to be suspicious of computer vision's claims to extract hidden meaning from images that humans cannot see or judge for themselves. Computational systems do notice, measure, and analyze things that humans cannot, so we should not dismiss them entirely; but they view our world differently, and their outputs should not be taken as the entire "truth."


The use of statistical analysis to study human faces began with Sir Francis Galton, who in the 1800s tried to find criminality in human faces. While Galton's study was inconclusive, others continued to look for correlations between human faces and personalities. However, instead of testing hypotheses grounded in theory, many such studies have been done simply for the sake of finding statistical correlations, which removes the researchers from theoretical accountability. These studies rest on the claim that by measuring someone's physical features we can develop an understanding of who they are and what they do, and access something about them that they do not know about themselves. They also claim that there is a stable statistical relationship between the face and a personality type. Yet they do not address the validity of their own epistemological platform.

Goldenfein also considers the ideas of criminologist Nicole Rafter, who argues that even erroneous methods of finding "truth" shape our ways of producing knowledge and our views of the world. When you develop new systems of measurement, you also get new systems of classification.


Goldenfein also discusses the idea of "computational empiricism": the idea that measurement is a more reliable way of gaining knowledge and finding the "truth" about a person than receiving qualitative information from that person. This idea is not new; it is the basis of medical diagnosis. Computational empiricism also determines what is measured and represented, which in turn shapes what is classified in the real world. In addition, it presents itself as a process of "exposing or revealing the fundamental nature of reality."


The idea that computer vision is not just a method but a philosophy raises many concerns. It brings up questions such as how to ensure that the conclusions produced by computer vision do not double as our understanding of the world, and how to address the claim that the truth about the world lies in data rather than in the human spirit. We must challenge the methods used to measure people, both by questioning the value of those measurements in our systems of knowledge and truth and by finding better measurements.
