Blog by Nabeel Seedat | MS Student in Electrical and Computer Engineering
We can let the genie out of the bottle, but do we need to regulate it? Brad Smith, President and Chief Legal Officer of Microsoft, examined this question in his talk entitled “Facial Recognition: Coming to a Street Corner Near You” at the Digital Life Seminar on February 6, 2019. Smith proposed that a regulatory floor is not only critical, but the only viable way to address many of the hard ethical questions associated with facial recognition technology.
The talk delved into the need for public regulation and corporate responsibility around facial recognition. The context centered on the notion that our thinking around facial recognition should reflect the values of the world we want to live in.
Before delving into the meat of the talk, Smith placed facial recognition in context by asking what has heralded this upsurge in the technology. He noted that while humans possess the innate capacity for facial recognition from birth, the dream of replicating this behavior with a computer relies on the fact that a face can ultimately be reduced to a series of numbers to which algorithms can be applied. This dream of machine-based facial recognition is not a new one, originating in the 1960s and ’70s. However, four key factors were ultimately responsible for the state of facial recognition as we know it today: (1) improvements to 2D and 3D cameras, (2) increased computational power in the cloud and using GPUs, (3) increased availability of data, and (4) AI algorithmic advancements.
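The reduction Smith describes — a face becoming a series of numbers — can be illustrated with a toy sketch. The embeddings, dimensions, and threshold below are entirely hypothetical; a real system would derive much longer vectors from a trained neural network, but the final comparison step works along these lines.

```python
import math

def cosine_similarity(a, b):
    # Measure how closely two face-embedding vectors point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(embedding_a, embedding_b, threshold=0.9):
    # Declare a match when similarity exceeds a tuned threshold.
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Toy 4-dimensional "embeddings" (real systems use 128 or more dimensions).
alice_photo_1 = [0.9, 0.1, 0.3, 0.5]
alice_photo_2 = [0.85, 0.15, 0.28, 0.52]
bob_photo = [0.1, 0.9, 0.6, 0.2]

print(same_person(alice_photo_1, alice_photo_2))  # True
print(same_person(alice_photo_1, bob_photo))      # False
```

Everything downstream — convenience, bias, surveillance — follows from how these numeric comparisons are deployed and at what scale.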
Smith argued that while the hard moral questions are currently front and center, we must not dismiss the societal benefits that the technology has already provided, citing four examples:
Consumer convenience, with face recognition enabling automated banking in Australia.
Police in India leveraging facial recognition to locate missing children.
The National Institutes of Health using face recognition to diagnose genetic diseases.
Microsoft’s own product, Seeing AI, providing person recognition for the blind.
It was postulated that because of the societal ramifications of this technology — both positive as presented, and negative as is currently widespread in the media — addressing the hard issues of facial recognition is of profound importance. According to Smith, we simply cannot watch it unfold and hope to put this genie back in the bottle. Rather, we must be proactive in regulating the genie that is facial recognition.
It is on this premise that the three key pillars of the talk regarding public regulation and corporate responsibility were built: privacy, bias, and democratic freedoms.
Smith prefaced this section with a glance back to 1890, drawing parallels between societal opinions on the camera then and on facial recognition today. He cited Warren and Brandeis, who wrote that “instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life”. Out of this work, which drew on Judge Thomas Cooley, developed the right of people to be left alone. According to Smith, while society ultimately adapted to cameras and the invasion of privacy they presented, facial recognition takes this invasion of privacy to another level. Now, retailers track behaviors in stores, and police track people through the streets. The question that arises is: how can we retain some degree of privacy against the deployment of such technology?
Smith proposes two solutions. First, notification that facial recognition technology is being deployed — as is already the case in Europe. Second, consent from consumers opting in to their data being utilized.
Smith next tackled the issue most widespread in the news: bias in facial recognition. He cited the work of Joy Buolamwini of the MIT Media Lab, who found that facial recognition error rates were group-dependent, rising markedly for women and people of color. The obvious issue is the lack of robust datasets (which the early-2019 IBM dataset may rectify). This points to a more important issue beyond the technology itself: how can we create a well-informed market with both the familiarity and the sensitivity to tackle the issues of bias?
Smith proposes that the only way to develop a well-informed market is to ensure that facial recognition systems can be tested across three facets:
Transparency: the system’s capabilities and limitations should be explainable.
Third-party testing and comparisons: to test for unfair bias and accuracy.
Meaningful human review: for high-stakes scenarios such as the approval of loans.
Finally, Smith delved into the notion that facial recognition has use cases that keep the public safe. However, there is a trade-off between public safety and democratic freedoms. Not regulating the usage of facial recognition technology could lead to mass surveillance on an unprecedented scale, à la George Orwell’s vision in 1984. In addressing issues related to free speech and the right to assemble, we need to explore mechanisms that limit government surveillance to areas of imminent risk, as well as imposing legal restrictions through court orders.
The key takeaway is that, like any technology, facial recognition clearly has benefits to society, even amid all the controversies surrounding its application. The question we could all take from this seminar is: “Do our faces deserve as much protection as our phones?”