
DL Seminar | The Objective Function: Science and Society in the Age of Machine Intelligence

Updated: Apr 30, 2022


By Zal Joshi

Cornell Tech


In his talk, The Objective Function: Science and Society in the Age of Machine Intelligence, Emanuel Moss reshapes how we think about the use of machine intelligence for researching complex social issues and the power it holds in society. He begins with a story of when he first encountered the term “algorithm” outside a machine learning setting. While he was working as a 3-D modelling specialist at an archeological site in western Turkey, a co-worker made a passing comment: “we are trying to build something that you can run algorithms on to figure out what people in the past were up to”.

This raises the question: what can algorithms, which are simply recipes for doing math, do that archeologists can’t? With algorithms prevalent across so many fields, the question then becomes: how have practitioners constructed the authority of algorithms such that they can produce knowledge across so many domains, archeology included? Ultimately, Emanuel concludes that machine intelligence may seem to threaten powerful actors with automation, but in reality it conserves the power of the institutions that employ it.

To fully understand why this is true, Emanuel provides a framework for thinking about how machine intelligence actually produces knowledge. He elaborates on two major approaches: the theory approach and the theory-free approach. The theory approach is the conventional way scientists think of conducting research, through empirical scientific methods and analysis. The theory-free approach holds, in short, that numbers convey meaning and that enough data can speak for itself. On this view, all other modes of knowledge creation, like archeology and ethnography, are ultimately unnecessary, since numbers alone suffice to provide knowledge about the world; it removes the experimenter from the picture completely. Emanuel suggests that these two approaches coexist but remain distinct and irreconcilable, and that this tension is why machine intelligence can hold the power it does.

Next, Emanuel shows us how the authority of machine intelligence is actually constructed, using his demo project in multi-task learning. In multi-task learning, one objective function serves multiple goals, as opposed to the standard approach, which focuses on only one. His demo defines a multi-task objective function that, given a newspaper article, determines both the section the article appears in (sports, entertainment, politics, etc.) and the kind of publication (tabloid or broadsheet). Alongside the standard machine learning questions that arise when formulating a multi-task learning algorithm, Emanuel and his team ran into questions that are not part of the ML process. What can be known from the data itself? How does the data convey culture and meaning? Is the distinction between “tabloids” and “broadsheets” real, or a construct in our minds? This led him to realize that by assigning labels to news articles, they were shaping which stylistic features would be available to the deep neural network they themselves were building. This points to the relationship between subjectivity and objectivity in machine intelligence, and to the notion that an objective function is itself subjective, depending entirely on how the researcher specifies it. Rather than revealing what a “broadsheet” or a “tabloid” is, the researchers are creating a reality for those terms through machine intelligence. Their meaning becomes the result of how the technology intervenes among the many possible meanings of the terms themselves.
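
To make the setup concrete, here is a minimal sketch of what such a multi-task model could look like. Emanuel’s actual demo is not reproduced here; the architecture, dimensions, and label sets below are hypothetical, chosen only to show how a single objective function can combine two tasks (article section and publication type) over a shared representation.

```python
# A minimal, hypothetical sketch of a multi-task text classifier: one shared
# encoder feeds two task heads, and a single objective sums both losses.
# Dimensions and label sets are illustrative, not taken from Emanuel's demo.
import torch
import torch.nn as nn

class MultiTaskNewsClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64,
                 n_sections=4, n_pub_types=2):
        super().__init__()
        # Shared representation: token embeddings averaged per article.
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        self.shared = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # Task-specific heads: article section and publication type.
        self.section_head = nn.Linear(hidden_dim, n_sections)  # sports, politics, ...
        self.pub_head = nn.Linear(hidden_dim, n_pub_types)     # tabloid vs. broadsheet

    def forward(self, token_ids, offsets):
        h = self.shared(self.embed(token_ids, offsets))
        return self.section_head(h), self.pub_head(h)

model = MultiTaskNewsClassifier()
loss_fn = nn.CrossEntropyLoss()

# One training step on a toy batch of two "articles" (flattened token ids).
token_ids = torch.randint(0, 20000, (12,))
offsets = torch.tensor([0, 6])         # where each article starts
section_labels = torch.tensor([0, 3])  # e.g., sports, politics
pub_labels = torch.tensor([1, 0])      # e.g., tabloid, broadsheet

section_logits, pub_logits = model(token_ids, offsets)
# The multi-task objective: a weighted sum of the two tasks' losses.
loss = loss_fn(section_logits, section_labels) + 0.5 * loss_fn(pub_logits, pub_labels)
loss.backward()
```

Notably, the label sets and the weighting between the two losses are choices made by the researcher, which is exactly where Emanuel locates the subjectivity of the objective function.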

So what happens when machine intelligence practices are purposefully developed to avoid intervening in a process? Algorithmic fairness seeks to avoid socially salient categories and to achieve high accuracy without perpetuating social patterns of inequity. And yet, as Emanuel describes, algorithmic fairness as a principle still transforms social patterns into stable objects by operationalizing them, thereby extending the power of machine intelligence. To show this, Emanuel draws on the InclusiveFaceNet paper as an example. The paper responds to the finding that facial detection algorithms have higher error rates for minority groups, and suggests that smile detection should work better when it accounts for race, since different races have different smiles. Because directly detecting race can have harmful consequences if misused, the paper uses transfer learning: race is learned first, and only the learned representation is carried over, so that it informs smile detection but not other tasks where race is impermissible. The paper concludes that the InclusiveFaceNet classifier achieves higher accuracy for smile detection when race is taken into consideration. But the greatest accuracy gains are for attributes that are historically racial caricatures. The transfer learning algorithm does not need to understand the concept of race in order to employ it; the paper fails to consider its problematic correlation between race (a social construct) and some underlying biological regularity. Even further, it can begin to construct new indices of race to define racial groups.
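
For readers unfamiliar with the mechanics, the pattern at issue can be sketched as follows. This is not the paper’s actual architecture; the trunk, dimensions, and category counts are hypothetical, meant only to show how a representation learned for one task (race classification) can be reused for another (smile detection) without race ever being predicted downstream.

```python
# A hedged sketch of the transfer-learning pattern described above: a feature
# trunk is first trained with a demographic head, then frozen and reused for
# smile detection, so race itself is never predicted at inference time.
# Architecture and sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

# Stage 1: a feature trunk pretrained with a race-classification head.
trunk = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
race_head = nn.Linear(16, 4)  # hypothetical number of race categories
# ... train trunk + race_head on demographically labeled faces here ...

# Stage 2: freeze the trunk and train only a smile head on its features.
for p in trunk.parameters():
    p.requires_grad = False
smile_head = nn.Linear(16, 2)  # smiling vs. not smiling

faces = torch.randn(8, 3, 64, 64)         # toy batch of face crops
smile_labels = torch.randint(0, 2, (8,))
logits = smile_head(trunk(faces))
loss = nn.CrossEntropyLoss()(logits, smile_labels)
loss.backward()  # gradients flow only into smile_head
```

Emanuel’s critique is aimed precisely at what this pattern hides: the frozen trunk still encodes whatever proxies for race it learned in the first stage, whether or not anyone ever reads a race label off it.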

The examples Emanuel gives us clearly articulate the power that machine intelligence and its wielders hold. And these examples merely scratch the surface: machine intelligence intervenes in high-stakes areas like recommendation systems, recidivism algorithms, and job recruitment systems, and the list only grows as technology advances.

Emanuel makes a great case for us as technologists to question whether the intervention of a machine intelligence approach in algorithmic fairness will exacerbate the very problems that fairness is meant to mitigate. He ends his talk with a profound point on the minimization of errors. Minimizing error is a large part of setting an objective function, and too often errors are narrowly scoped by the data practitioners who set these objective functions. By involving relevant stakeholders before drafting an objective function, we as technologists can make it possible to challenge the power that machine intelligence holds over all of us.
