CounterFAccTual: How FAccT Undermines Its Organizing Principles

Ben Gansky & Sean McDonald

Research | June 2022

Most of the conversation taking place under the banner of ‘public interest technology’ focuses on how digital technologies are designed and developed — rather than on how they are actually used. Calls for more representative datasets, more diversity in engineering talent pools, and more extensive documentation processes are not necessarily bad in themselves — at least, not until we weigh them against their opportunity costs.

The theories of change that prioritize design- and development-stage interventions are flawed.

First, the ‘lab-centricity’ reinforced by focusing on the contexts of technology development concentrates investment and attention on the engineers, designers, and developers of technology. We should be concerned about the narrowness of the perspectives and experiences this focus privileges. We should also be attentive to the fact that by concentrating our reform efforts on ‘the lab’, we amplify the political power held there — and by the corporations who own, and rule, ‘the lab’. When the buck stops with corporate executives, we have seen time and again that their incentives lead to the development and distribution of demonstrably unsafe and unreliable technologies.

Second, the ethical and justice-related demands that should shape a digital system are always contextual: public health ethics differ from those of college admissions or tax administration. This means that critical conversations about what ‘good’ looks like for a digital system, and the work of agreeing on rules to guide the use of digital systems toward those visions of ‘good’, are best carried out by the particular publics implicated by these systems — as doctors and patients, constituents and representatives, and so on. Yet, in part because of the consolidation of the technology sector over the past several decades, the same technical system may be deployed across widely varied contexts. This is another reason why focusing on the sites and situations of technology development misses the point, which is to align a digital system with the ethics and needs of those directly implicated in its use.

Third, the structure of digital economies also undermines the efficacy of lab-centric interventions. Data brokers aggressively obscure their sources, recombining data and developing derivative products in ways that make it all but impossible for even meticulous documentation (of data provenance or model characteristics, for example) to survive the trip to end users.

Finally, ‘fairness, accountability, and transparency’ projects tend to situate their efforts in a fundamentally counterfactual universe: one in which functioning institutions and processes for due diligence in implementation, and for redress of harms, exist and are ready to interoperate with these systems. Unfortunately, in our world, those institutions and processes have been captured by the very interests they are meant to hold accountable, intentionally hollowed out, and/or were never designed to function in today's sociotechnical landscape. Errors and intentional misuse in implementation are unavoidable, and the resulting harms have been exhaustively demonstrated to fall disproportionately on already-marginalized communities. Continuing to produce (fair! accountable! transparent!) data-enabled systems that operate in high-impact areas, irrespective of this landscape's radically insufficient paths to justice, is therefore a choice: a choice to be CounterFAccTual.

The full paper can be downloaded here.