
DL Seminar | Fairness in Sociotechnical Machine Learning Systems






By Carwyn Rahman

Cornell Tech


The final installment of this year’s Digital Life Seminar series fittingly addressed a topic that in recent months has captured the attention of the general public and policymakers alike. Sina Fazelpour’s talk centered on a profound uncertainty, namely how the increasing adoption of algorithmic decision-making will shape society, and on how we might use both technical and policy interventions to ensure that the algorithmic tools our institutions deploy align with our values.


Fazelpour opened the lecture by highlighting the growing ubiquity of algorithmic decision-making tools in all aspects of our lives. Whether in hiring processes, social media feeds, or healthcare support systems, these tools are purported to enhance efficiency and deliver societal benefits. Real-world deployments, however, often reveal significant risks and biases, particularly for marginalized communities.


The Gender Shades project, which assessed the performance of commercial facial recognition tools, exemplifies the problem. While these tools appeared comparably accurate in aggregate, disaggregated analysis revealed disturbing biases: error rates were markedly higher for women and for individuals with darker skin tones, and highest of all for darker-skinned women. This motivated the central questions of the lecture: how can we mitigate these issues in a way that aligns with our societal values, and does focusing on fairness measures reliably address concerns about harmful algorithmic biases? Fazelpour pointed to the touting of so-called "bias-free" algorithms and asked whether we can trust such claims of fairness. While many initiatives are underway to regulate these systems, such as the EU AI Act, there are concerns about the accuracy and efficacy of the measures employed in their development and deployment.
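
The auditing strategy Gender Shades popularized, disaggregated evaluation, is straightforward to express in code. Below is a minimal Python sketch (the function name and the tiny arrays are illustrative, not the project's actual benchmark) of how a respectable aggregate accuracy can conceal a large per-group gap:

```python
import numpy as np

def disaggregated_accuracy(y_true, y_pred, groups):
    """Report overall accuracy alongside accuracy for each subgroup."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    report = {"aggregate": float(np.mean(y_true == y_pred))}
    for g in np.unique(groups):
        mask = groups == g
        report[str(g)] = float(np.mean(y_true[mask] == y_pred[mask]))
    return report

# Hypothetical audit data: accuracy looks tolerable in aggregate,
# but all of the errors fall on one subgroup.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disaggregated_accuracy(y_true, y_pred, groups))
# {'aggregate': 0.75, 'A': 1.0, 'B': 0.5}
```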


Fazelpour then explained the notion of "ideal theorizing," in which an ‘ideal’ solution is identified and steps are taken to work backwards from it. Specifically, to devise guidelines for agents who want to act justly in our unjust world, one starts by developing a conception of an ideally just world or domain. This conception serves two functions: an evaluative function, whereby injustices are detected as deviations of the actual from the ideal, and a target function, which orients justice-seeking efforts towards reducing those deviations.
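
In fair machine learning, the evaluative function often takes a quantitative form: fix an idealized criterion, say demographic parity, under which all groups are selected at equal rates, and measure how far actual decisions deviate from it. A minimal sketch with hypothetical decision data (the function and the numbers are illustrative, not a measure Fazelpour endorses):

```python
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Deviation from the 'ideal' of equal selection rates: the largest
    absolute difference between a group's selection rate and the mean
    selection rate across groups."""
    decisions, groups = np.asarray(decisions), np.asarray(groups)
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return float(max(abs(r - np.mean(rates)) for r in rates))

# Hypothetical hiring decisions: group A is selected at a rate of
# 0.75 and group B at 0.25, so the gap from the ideal is 0.25.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.25
```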


However, he warned of its limitations. By focusing too much on the ideal, we might miss injustices along the way, neglect issues with construct validity, and fail to assign responsibilities appropriately. Ideal theorizing in machine learning faces the same types of challenges as ideal theorizing more generally: (a) inadequate and distorting diagnoses, (b) insufficiency for parsing and assigning responsibilities, and (c) inadequate or misguided prescriptive guidance. Current algorithmic fairness efforts frequently overlook the multiple decision-makers involved in complex systems, a factor of paramount importance in a heterogeneous world. Fazelpour referenced Gerald Gaus's work on complex systems, emphasizing that local decision-makers' success depends on the broader socio-economic systems in place. Any audit aiming to evaluate such algorithms must therefore consider these complex interactions and dynamics.


Fazelpour proposed an approach that takes into account the complex social dynamics inherent in real-world applications. In most cases, algorithmic properties are evaluated within an artificial 'sandbox,' detached from the realities and complexities of societal dynamics. Acknowledging those dynamics, such as incentive effects, strategic behavior, and partial compliance, is crucial: especially when multiple decision-makers are involved, fairness interventions can backfire, potentially harming the very people they were designed to protect.
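
To see why the 'sandbox' framing misleads, consider a toy Monte Carlo simulation of a hiring market with several employers, only some of whom comply with a parity constraint. Every detail here, the bias model, the capacities, the random assignment of applicants, is an illustrative assumption rather than Fazelpour's own model; the point is simply that the outcome for group B depends on the whole system of decision-makers, not on any single algorithm evaluated in isolation:

```python
import numpy as np

rng = np.random.default_rng(0)

def b_share_of_hires(compliant_flags, n=2000, bias=0.5, capacity=200):
    """Toy hiring market. Applicants are split randomly among employers,
    and group B's observed scores understate their skill by `bias`.
    Compliant employers hire top scorers within each group at equal
    rates (demographic parity); non-compliant employers hire purely by
    observed score. Returns group B's share of all hires."""
    group = rng.integers(0, 2, n)               # 0 = group A, 1 = group B
    score = rng.normal(size=n) - bias * group   # biased measurement
    employer = rng.integers(0, len(compliant_flags), n)

    b_hired = total = 0
    for e, compliant in enumerate(compliant_flags):
        pool = np.flatnonzero(employer == e)
        if compliant:
            hires = []
            for g in (0, 1):
                members = pool[group[pool] == g]
                k = round(capacity * len(members) / len(pool))
                hires.extend(members[np.argsort(score[members])[::-1][:k]])
            hires = np.asarray(hires)
        else:
            hires = pool[np.argsort(score[pool])[::-1][:capacity]]
        b_hired += int(group[hires].sum())
        total += len(hires)
    return b_hired / total

print("no compliance:  ", b_share_of_hires([False, False]))
print("half compliance:", b_share_of_hires([True, False]))
print("full compliance:", b_share_of_hires([True, True]))
```

In this toy market, half compliance closes only part of the gap; once applicants also behave strategically about where to apply, the gains can shrink further or even reverse, which is exactly the kind of system-level effect a sandbox evaluation never sees.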


Fazelpour's analysis of fairness in machine learning systems reveals significant challenges. He proposes a reinterpretation of the impossibility results, which show that common statistical fairness criteria, such as calibration and equal error rates across groups, cannot in general be satisfied simultaneously. Rather than taking these results to mean that any effort towards fairness is bound to fail, he argues they simply confirm that we live in a non-ideal world. As he and Lipton argue in their paper:


“If matching the ideal in various respects simultaneously is impossible, then we require, in addition to an ideal, a basis for deciding which among competing discrepancies to focus on. In this manner, the impossibility results in fair ML provide a novel lens to approach the philosophical debate about the extent to which normative theorizing on matters of justice can proceed in isolation from empirical socio-historical facts.” [1]
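
To make the impossibility results concrete, one well-known identity (due to Chouldechova) implies that when two groups have different base rates, a classifier with equal positive predictive value and equal false-negative rates across groups must have unequal false-positive rates. A short numeric sketch, with illustrative parameter values:

```python
def fpr_from(prevalence, ppv, fnr):
    """Chouldechova's identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR),
    so once PPV and FNR are equalized across groups, each group's
    false-positive rate is pinned down by its base rate p."""
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# Equal PPV (0.8) and equal FNR (0.2), but different base rates,
# force unequal false-positive rates:
for name, prev in [("group A", 0.5), ("group B", 0.2)]:
    print(name, round(fpr_from(prev, ppv=0.8, fnr=0.2), 3))
# group A 0.2
# group B 0.05
```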


Alternative approaches include process-based and trajectory-based views. The process-based view emphasizes addressing fairness at the different stages of the algorithmic pipeline, from data collection through training to deployment, whereas the trajectory-based view seeks to capture how decisions and their effects interact dynamically over time.
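
As a concrete, and purely illustrative, instance of the process-based view, the sketch below intervenes at the post-processing stage, choosing per-group thresholds so that selection rates match; the docstring notes the levers available at earlier stages. The trajectory-based view would instead iterate a selection step like this and study how its effects compound over time, much as in the market simulation above:

```python
import numpy as np

def parity_thresholds(scores, groups, target_rate):
    """Post-processing lever: pick a per-group score threshold so each
    group is selected at (roughly) the same target rate. Earlier stages
    offer their own levers: pre-processing (rebalancing training data)
    and in-processing (a fairness penalty in the training objective)."""
    scores, groups = np.asarray(scores), np.asarray(groups)
    return {g: np.quantile(scores[groups == g], 1 - target_rate)
            for g in np.unique(groups)}

# Hypothetical scores in which group B's scores run lower overall.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.6, 0.1, 500), rng.normal(0.5, 0.1, 500)])
groups = np.array(["A"] * 500 + ["B"] * 500)
cut = parity_thresholds(scores, groups, target_rate=0.3)
selected = scores >= np.array([cut[g] for g in groups])
for g in ("A", "B"):
    print(g, selected[groups == g].mean())  # both roughly 0.30
```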

 

Fair machine learning in the context of our digital lives

 

In the broader context of digital life, this lecture underscored the growing complexity and ethical challenges associated with machine learning technologies. As we delegate more decision-making power to these algorithms, questions about fairness, transparency, and accountability become more pressing. In the rush to weave AI and ML into our societal fabric, we must ensure that fairness and ethics are not an afterthought. By continually challenging and questioning the algorithms that underpin our digital lives, we can work towards a future in which technology enhances societal equity rather than exacerbating existing disparities.


In conclusion, Fazelpour's insightful lecture made a compelling case for a more holistic and contextually aware approach to fairness in machine learning systems. As algorithms supplant direct human decision-making in many realms, it is imperative that they be designed to reflect our broader values (to the extent there is consensus). The Digital Life Initiative fosters crucial multidisciplinary dialogue, tackling the complex interplay between technology, policy, law, and ethics. Ensuring that the growing research in this area informs legislation and corporate policy is now essential if the trajectory on which artificial intelligence takes us is to be a fair, just, and safe one.


[1] Sina Fazelpour and Zachary C. Lipton (2020). Algorithmic Fairness from a Non-ideal Perspective. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES '20).



