By Benjamin Yellin
On January 30th, 2020, the Cornell Tech community had the opportunity to hear from Kashmir Hill as part of the Digital Life Seminar Series. Hill is a reporter for the New York Times, and in her talk she spoke about the power of facial recognition technology to improve public safety and, perhaps more importantly, about the more frightening use cases this technology enables. Since anyone with any sort of digital trace has data collected about them constantly, it is important to consider how this data is being used now and how it could be used in the future. While the talk focused on the potential dangers of facial recognition technology, many of its messages hold true for all kinds of data collected about individuals.
One of Hill’s main projects is a blog called The Not-So Private Parts. Its goal is to expose the potential harms of sharing information that, at first glance, might seem harmless. For readers skeptical that sharing certain personal information could be dangerous, Hill offered stories of tangible harm. One woman who was expecting a child had a miscarriage, yet was still mailed baby formula she had never ordered, a potentially traumatizing reminder for a mother who has just lost her child. Hill also described sex workers appearing as suggested friends to their clients, or to friends of their clients, exposing both parties involved in such a private activity.
The increasing availability of facial recognition technology has made it easier for people with both good and bad intentions to make use of it. With code shared openly online and images publicly available on platforms like Flickr, far more people can build such systems than ever before. Images from Flickr were compiled into a dataset known as YFCC100M, which contains many pictures of children, many of whom never knew their photos were publicly available online. The dataset was downloaded by companies including Estée Lauder and several companies in China, without the knowledge of the people pictured. This fits into the larger pattern of people unknowingly supplying their data for public use, and possible exploitation, an issue the public is now acutely aware of.
These images, along with many others publicly available on the web, were then used by a company called Clearview AI, which maintained a database of millions of online images of people and matched those images to names. Clearview AI piloted its technology with police departments, which used the application to identify criminal suspects. Determined to learn more about the company, Hill went to its offices to ask questions, but found no one there. On her way out of the building she ran into two people whom she deduced were connected with Clearview AI; they turned out to be the founders. She told them that their actions went a step beyond what companies like Facebook and Google were willing to do, despite having the ability to do it. By compiling so much image data and putting search capabilities into the hands of powerful actors like law enforcement, Clearview AI effectively ended public anonymity. When Hill asked the founders what they thought the ramifications of their work might be, they said they had not thought about it yet and would get back to her.
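For readers curious about the mechanics, the core of a face-search system like the one described above is conceptually simple: every scraped photo is converted into a numeric "embedding," and identifying someone amounts to finding the nearest stored embedding. The sketch below is purely illustrative and makes no claim about Clearview's actual system; the embed_face function, the placeholder gallery data, and the match threshold are all hypothetical.

```python
import numpy as np

# Illustrative sketch of a generic face-search pipeline; NOT Clearview's
# actual system. embed_face, the gallery data, and the threshold are
# hypothetical placeholders.

def embed_face(image) -> np.ndarray:
    """Stand-in for a real face-embedding model, which maps a photo to a
    fixed-length vector where similar faces land close together."""
    raise NotImplementedError("swap in a real embedding model here")

# A scraped "gallery": one embedding per photo, plus the name found on
# the page each photo came from (random placeholder data here).
rng = np.random.default_rng(0)
gallery_vectors = rng.normal(size=(10_000, 128))
gallery_names = [f"person_{i}" for i in range(10_000)]

def identify(probe_vector: np.ndarray, threshold: float = 0.6):
    """Return the name attached to the closest gallery face, or None
    if no stored face is within the (hypothetical) match threshold."""
    distances = np.linalg.norm(gallery_vectors - probe_vector, axis=1)
    best = int(np.argmin(distances))
    return gallery_names[best] if distances[best] < threshold else None
```

The unsettling part is not the nearest-neighbor lookup, which is textbook material, but the gallery: once enough labeled photos are scraped, anyone running such a search can attach a name to a face in a crowd.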
So many technologies can be used for both positive and negative purposes, and any time a new technology is created, its creators should consider how others with questionable motives could use it. While identifying criminals more effectively initially seems like a good thing, people need to consider how technology like this strips away people's anonymity, and how dangerous that could be for certain groups. Such technology could also encode racial bias and be used to profile people who would otherwise never be suspected of committing a crime.
While investigating Clearview AI, Hill asked several police officers she knew to run her picture through the algorithm, and each told her that no results came up. She finally asked another officer, who admitted that they did not want to give her any information because they could see she was a reporter for the New York Times. She realized the same thing must have happened each of the other times; the other officers simply had not been willing to tell her. People in positions of power can choose whether or not to share this kind of information, and that choice has a significant impact on what the public is able to know. Even though Hill's information came up every time an officer ran her picture through the Clearview platform, only one was willing to admit it. The others were afraid that telling a New York Times reporter would end their ability to use this powerful technology.
With any revolutionary technology like Clearview's, the people using it need to understand how it actually works and what happens to the data fed into the algorithm. One police officer Hill interviewed believed that when they ran someone's face through the Clearview algorithm, the image stayed locally on their computer; in fact, it was sent to the company and incorporated into its database. Such confusion about how the application works, among the very people who use it, is scary to say the least. It is also controversial whether a private company should be monitoring police activity, which is meant to serve the public good.
Hill shared critical insights about the landscape of facial recognition technology and the negative effects it can have on society. It is vital to consider how the technologies we develop could be used for both good and evil, and these discussions should continue beyond the classroom. This talk will certainly make me more thoughtful about how something I build could be used by others in the future.
Benjamin Yellin is an MS student at Cornell Tech, studying Information Systems with a focus on Health Tech. You can find him on LinkedIn, or read his blog about math and baking.