
An Accounting of Accountability Scholarship in FAccT 2022


By A. Feder Cooper (DLI Doctoral Alum, Cornell University)



FAccT, or the ACM Conference on Fairness, Accountability, and Transparency, has become the flagship venue for sociotechnical work in computing. However, FAccT’s primacy has not come about without conflict. Among numerous critiques, one common refrain is that the conference expresses a bias toward two styles of work: fairness-related research and quantitative methods. Moreover, these two styles are even narrower than their descriptors imply: fairness-related studies typically limit themselves to machine learning (ML), so a given work frequently involves both fairness and quantitative methods rather than just one of the two, falling under the single overarching area of “algorithmic fairness” research.


Work published this year at FAccT, which takes a retrospective approach to identifying topics at the prior four years of FAccT conferences [1] (2018-2021, full proceedings papers), largely supports the above critique. In their mixed-methods study, Laufer et al. analyze the dominant themes in FAccT scholarship, providing complementary qualitative and quantitative results that show a clear bias toward fairness and quantitative papers. Their manual, qualitative paper-coding analysis shows that 69% of all publications involve fairness, with transparency and accountability each following at 26%, and that 61.3% of papers can be categorized (or self-categorize) as “STEM” or “technical” works; their unsupervised quantitative analysis using topic modeling and citation networks corroborates this finding by yielding multiple, overlapping fairness- and machine-learning-specific categories. Of particular note in their topic model analysis is that, while accountability remains relatively under-examined in relation to fairness, there has been a year-over-year increase in accountability-focused papers over the first four years of FAccT.


A natural extension of this finding is to ask how accountability fared in 2022, the fifth year of the FAccT conference. FAccT 2022 was the largest conference to date, with over twice as many submissions as in 2021. Did these new additions to the FAccT corpus extend the pattern of increasing interest in sociotechnical accountability scholarship? What forms did work on accountability take this year? How did this work complement, extend, or reimagine prior work on accountability? In the remainder of this blog post, I extend the work of Laufer et al. to the FAccT 2022 proceedings, with an eye turned particularly toward these questions concerning accountability scholarship. I walk through my preliminary findings concerning the state of accountability scholarship at FAccT, and close with directions and questions for future work.


Identifying Accountability-Related Work in FAccT 2022


I scraped the 2022 FAccT proceedings, which contain the archival, full-length publications. Laufer et al. used the paper PDFs to generate their dataset, which led to various conversion and data quality errors and thus required a time-consuming, two-phase iterative cleaning process that cycled between automated and manual cleaning. This work was documented to take on the order of 50 hours for 186 papers. To avoid such a substantial time investment, rather than converting the PDF versions of the papers to text files, I pulled the HTML versions of the papers, which ACM creates for accessibility purposes. These versions contain the raw text and therefore are not subject to the same data quality issues as the PDF versions. As a result, I was able to use the code repository that accompanies Laufer et al. to perform automated data cleaning with more limited manual intervention for the 181 papers published this year.
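As a rough illustration of this scraping step, the sketch below pulls the HTML rendering of a paper and strips it down to raw text. It is a minimal sketch only: the DOI and the URL pattern for ACM's full-text HTML pages are placeholders, and the actual pipeline relied on the cleaning code from the Laufer et al. repository rather than this ad hoc extraction.

```python
# Minimal sketch: fetch the HTML full-text version of a paper and extract its raw text.
# The DOI below is a hypothetical placeholder; in practice the list would come from
# the FAccT 2022 proceedings listing, and the URL pattern is an assumption for illustration.
import requests
from bs4 import BeautifulSoup

dois = ["10.1145/3531146.XXXXXXX"]  # hypothetical DOI

def fetch_paper_text(doi: str) -> str:
    url = f"https://dl.acm.org/doi/fullHtml/{doi}"  # assumed URL pattern
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Drop script and style tags, then collapse the remaining text into one string.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())

corpus = {doi: fetch_paper_text(doi) for doi in dois}
```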


For consistency, I applied the topic model used (and exported to the repository) by Laufer et al. to assign the 20 topics identified from 2018-2021 to the papers in 2022.[2] I spot-checked the quality of the topic assignments by examining a random selection of papers from the corpus, and then narrowed in on the papers identified as having accountability as a focus.
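The sketch below shows what this assignment step looks like in broad strokes: load a previously fitted topic model, infer topic weights for the 2022 texts, and flag papers whose top-weighted topics include accountability. It assumes a scikit-learn vectorizer and LatentDirichletAllocation model saved with joblib; the Laufer et al. repository may use a different library, and the file names and topic labels here are hypothetical.

```python
# Minimal sketch: apply an already-fitted topic model to new papers and flag those
# whose dominant topics include "accountability". Artifact names, topic labels, and
# the scikit-learn model type are assumptions for illustration.
import joblib
import numpy as np

papers = {
    # hypothetical DOI -> cleaned full text, produced by the scraping/cleaning step above
    "10.1145/3531146.XXXXXXX": "cleaned full text of one FAccT 2022 paper ...",
}

vectorizer = joblib.load("vectorizer.joblib")    # hypothetical exported artifacts
topic_model = joblib.load("topic_model.joblib")  # e.g., sklearn LatentDirichletAllocation
topic_labels = ["fairness", "accountability"]    # truncated; one manual label per fitted topic

doc_term = vectorizer.transform(list(papers.values()))
doc_topic = topic_model.transform(doc_term)      # shape: (n_papers, n_topics)

accountability_idx = topic_labels.index("accountability")
# Treat a paper as accountability-related if that topic is among its three highest-weighted topics.
top_topics = np.argsort(doc_topic, axis=1)[:, -3:]
accountability_papers = [
    doi for doi, top in zip(papers.keys(), top_topics) if accountability_idx in top
]
print(f"{len(accountability_papers)} of {len(papers)} papers assigned the accountability topic")
```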


My analysis identified 16 papers (of 181) to which the model from Laufer et al. assigned “accountability” as a topic, which is less than 10% of all FAccT 2022 papers. Despite their limited number, these papers cover a wide variety of concepts across computing and other disciplines: case studies of specific technical systems, framework requirements [3] for accountability in ML, philosophical and normative discussions of underlying logics of accountability, legal tech pieces regarding issues like trade law and torts, and labor organizing in the tech industry. Notably, none of these papers was explicitly about an algorithm or other quantitatively focused approach, and only one paper, Making the Unaccountable Internet, did not concern machine learning or artificial intelligence. [4]


Takeaways, Risks, and Open Questions


From this preliminary analysis, it is not clear whether the trend observed across the first four years of FAccT, namely an increasing emphasis on accountability-related scholarship within the community, has continued into 2022. Beyond this quantitative observation, however, there are more interesting takeaways and questions concerning the work published this year.


For one, many of the works lack a concrete or working definition of what is meant by “accountability.” In some cases the definition is tacit, while in others it is implied that certain technical interventions, such as audits or explainability, will facilitate accountability, a premise that is not itself justified or interrogated but presented as an accepted idea within the community. Still other works mention accountability, but do so only to list “fairness, accountability, and transparency” as a general framing device for a work’s salience to the FAccT community, without engaging directly with accountability itself.


All told, the published papers from 2022 give the general impression that accountability-specific interventions remain under-explored in the FAccT literature. Put differently, the works this year paint the picture of a field still defining its scope and contours. There is a clear, cross-cutting interest in articulating principles of or requirements for accountability, particularly concerning machine learning systems. And it is clear that accountability touches questions concerning every such system: who these systems benefit, who they harm, what motivates their use, and so on. However, substantial work remains on concrete, actionable interventions: specific frameworks, and empirical investigations that put those frameworks and tools to the test. Without the growth of this work, FAccT runs the risk of assuming that technical interventions like audits are constitutive of accountability (without contesting this assumption), or of leaving accountability as a vague, catch-all term for values that are “not quite fairness” or “not quite transparency,” a blanket concept that signals normative or sociotechnical attention in general rather than accountability specifically.


[1] The author of this post is a co-author on this paper and was responsible for delivering the unsupervised learning results.

[2] A future extension would be to re-run the entire modeling process, identify new topics, and reassign them across the five years.

[3] The author of this post is a lead author on this paper.

[4] The author of this post is the lead author on this paper.




A. Feder Cooper

DLI Doctoral Fellow (Alum)



Cornell Tech | 2022

