DL Seminar | The Dangers of Solutionism in ML

Updated: Apr 11

By Srinath Narain and Konstantinos Ntalis

The March 5, 2020 edition of the Digital Life Seminar series welcomed Dr. Zachary Lipton, assistant professor at Carnegie Mellon University, for a talk on “Fairness and Interpretability in Machine Learning and the Dangers of Solutionism”.

Dr. Lipton works on this area with his Approximately Correct Machine Intelligence (ACMI) Lab, which studies these issues in domains including healthcare, causality, robustness, machine learning, economics, ethics, and social impact. His recent paper, “Does mitigating ML's impact disparity require treatment disparity?”[1], examines this problem in depth.

Machine learning today is indispensable in many aspects of society and affects multiple dimensions of human life. Its applications include fraud detection, self-driving vehicles, advisory and compliance software, and the recommendations that streaming services and social media platforms provide to consumers, to name a few. Dr. Lipton's work in this area has focused on the irresponsible use of machine learning technologies that create the false belief of a solution while the real problem remains unsolved or is sometimes even made worse. He explains that the root of the issue lies in the naive way the problem is framed, which creates a perilous pattern.

Lipton points out that machine learning is mostly used as a curve-fitting technique and, for the most part, to provide domain-specific prediction. While this can help with certain kinds of problems, it does not address many of the problems affecting us. What happens is that a real-world issue is converted into a surrogate problem and a metric of success is defined; a machine learning algorithm is then used to fit a model and provide a prediction. However, the whole process may suffer from an innate flaw, as it originates and proceeds “within the paradigm whose insufficiencies are still the root cause.” This, according to Dr. Lipton, leads users to believe the issue is being solved, while in reality it remains at a standstill.

Dr. Lipton points out that these insufficiencies arise from one or more of the following factors:

  1. Prediction: inability to predict all the elements of the problem.

  2. Inability to express the whole problem in its totality in a mathematical form.

  3. Inability to express elements of legal and ethical language in a mathematical form.

  4. Inability to incorporate societal and equity considerations into the problem and the algorithm.

Dr. Lipton illustrates this issue with several examples showing how inadequacies in the problem statement yield a reshaped, yet still unsolved, problem as the "solution". One example is the so-called "bias-free resume sorter", which aims to separate qualified from unqualified candidates without being influenced by societal bias or discriminatory human screeners. These models try to treat groups equally using a combination of the following measures:

  1. Impact parity: the outcome is independent of group status.

  2. Treatment parity: the decision Y depends only on the features X and not on the protected attribute Z.

  3. Representational parity.

  4. Equalized odds/opportunity: equal false negative and/or false positive rates across groups.
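As a rough illustration (not code from the talk), the first and fourth measures can be checked directly against a model's predictions. The helper below is a hypothetical sketch that measures the gap in positive-prediction rates (impact parity) and in error rates (equalized odds) between two groups:

```python
import numpy as np

def parity_gaps(y_true, y_pred, group):
    """Hypothetical helper (for illustration only): fairness gaps
    between two groups, given 0/1 labels, predictions, and group ids."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = {}

    # Impact parity: difference in positive-prediction rates across groups.
    rate = lambda g: y_pred[group == g].mean()
    gaps["impact"] = abs(rate(0) - rate(1))

    # Equalized odds: differences in false positive and false negative rates.
    def fpr(g):  # fraction of true negatives predicted positive
        return y_pred[(group == g) & (y_true == 0)].mean()
    def fnr(g):  # fraction of true positives predicted negative
        return 1 - y_pred[(group == g) & (y_true == 1)].mean()
    gaps["fpr"] = abs(fpr(0) - fpr(1))
    gaps["fnr"] = abs(fnr(0) - fnr(1))
    return gaps
```

Note that the criteria are distinct: a classifier can have a zero impact-parity gap while its error-rate gaps remain large, as in the test case below.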

Dr. Lipton points out that in a utopian society where everyone is equal, such systems might help and such ideas could work. But in the real world, these measures suffer from the following inadequacies:

  1. A myopic viewpoint: no attention is paid to how the disparity arose, what the model's impacts are, or what the responsibilities of the decision-maker are; the aim is simply to satisfy one of the parity constraints.

  2. Failure to appreciate that the various parity criteria may be mutually irreconcilable.

  3. Inability to capture legal/philosophical works and ideas in the model.

  4. Failure to address whether the decision is “justified”.

This leads to the creation of models that are flawed and often do not produce the desired results, creating a false belief in a solution while still propagating systemic discrimination. Furthermore, these models can produce bizarre results. For instance, Dr. Lipton considered the familiar problem of predicting admission into a university program. He showed that when features such as hair length or field of scientific interest are used to avoid including the protected feature (for instance, gender or race), a sufficiently expressive model can infer the protected feature even though it is not explicitly included in the learning process.
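The proxy effect Dr. Lipton described can be reproduced with a toy simulation (the feature name and numbers here are hypothetical, not from the talk): when an innocuous-looking feature is distributed differently across groups, even a trivial model that never sees the protected attribute recovers it.

```python
import random

random.seed(0)

# Hypothetical simulation: a proxy feature ("hair length") whose
# distribution differs by a protected attribute z the model never sees.
n = 10_000
samples = []
for _ in range(n):
    z = random.random() < 0.5                      # protected attribute
    hair = random.gauss(30.0 if z else 10.0, 8.0)  # proxy feature
    samples.append((z, hair))

# A trivial threshold "model" on the proxy alone recovers z far above chance.
accuracy = sum((hair > 20.0) == z for z, hair in samples) / n
print(f"recovered protected attribute with accuracy {accuracy:.2f}")
```

Any sufficiently expressive model trained on such proxies will do at least this well, which is why simply dropping the protected column does not, by itself, deliver treatment parity.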

In conclusion, Lipton's talk explored the consequences of deploying ML-based technology in the wild, the limitations of recent solutions (so-called fair and interpretable algorithms) for mitigating societal harms, and the meta-question: when should (today's) ML systems be off the table altogether?

[1] arXiv:1711.07076 [stat.ML]


