The Automated Administrative State

DANIELLE CITRON
Morton & Sophia Macht Professor of Law at the University of Maryland Francis King Carey School of Law 

RYAN CALO
Lane Powell & D. Wayne Gittinger Endowed Professor and Associate Professor of Law, University of Washington

 

The administrative state has undergone radical change in recent decades. In the twentieth century, agencies in the United States generally relied on computers to assist human decision-makers. In the twenty-first century, computers are making agency decisions themselves. Automated systems are increasingly taking human beings out of the loop. Computers terminate Medicaid benefits for cancer patients and deny food stamps to individuals. They identify parents believed to owe child support and initiate collection proceedings against them. Computers purge voters from the rolls and deem small businesses ineligible for federal contracts [1].

Automated systems built in the early 2000s eroded procedural safeguards at the heart of the administrative state. When government makes important decisions that affect our lives, liberty, and property, it owes us “due process,” understood as notice of, and a chance to object to, those decisions. Automated systems, however, frustrate these guarantees. Some systems, like the “no-fly” list, were designed and deployed in secret; others lacked audit trails, making it impossible to review the law and facts supporting a system’s decisions. Because programmers working at private contractors lacked training in the law, they distorted policy when translating it into code [2].

Some of us in the academy sounded the alarm as early as the 1990s, offering an array of mechanisms to ensure the accountability and transparency of the automated administrative state [3]. Yet the same pathologies continue to plague government decision-making systems today. In some cases, these pathologies have deepened and extended. Agencies lean upon algorithms that turn our personal data into predictions purporting to reflect who we are and what we will do. The algorithms themselves increasingly rely upon techniques, such as deep learning, that are even less amenable to scrutiny than purely statistical models. Ideals of what the administrative law theorist Jerry Mashaw has called “bureaucratic justice,” in the form of efficiency with a “human face,” feel impossibly distant [4].


The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. For a start, we have yet to address in any meaningful way the widening gap between the commitments of due process and the actual practices of contemporary agencies [5]. Nonetheless, agencies rush to automate (surely due to the influence and illusory promises of companies seeking lucrative contracts), trusting algorithms to tell us whether criminals should receive probation, whether public school teachers should be fired, or whether severely disabled individuals should receive less than the maximum of state-funded nursing care [6]. Child welfare agencies conduct intrusive home inspections because some system, which no party to the interaction understands, has rated a poor mother as having a propensity for violence. The challenge of preserving due process in the face of algorithmic decision-making is an area of renewed and active attention within academia, civil society, and even the courts [7].

Second, and routinely overlooked, we are applying the new affordances of artificial intelligence in precisely the wrong contexts [8]. Any sufficiently transformative technology is double-edged. On the one hand, it raises legal concerns about its potential to undermine law’s formal promises, including due process. On the other, and far less appreciated, transformative technologies invite us to take inventory of the law’s aspirational goals that are not being met.

For instance, we are using machine learning to displace the important role of a human arbiter while leaving on the table some of its greatest advantages. Is it not remarkable, for example, that litigants who do not speak English must wait for agencies to find translators when multiple companies offer free real-time translation apps [9]? Could the same algorithms that purport to predict citizen behavior be used to organize an administrative law judge’s docket more efficiently? Could they be used to allocate funding for childcare to relieve burdens borne by poor mothers and thus prevent stress that might precipitate child abuse? Could they identify the training teachers need to shore up their expertise, rather than serving as grounds for their firing?

Third, in practice, the set of legal values and commitments in jeopardy only expands. Recent critiques surface the extent to which algorithms reinforce inequality. The gains and benefits of artificial intelligence seem unevenly distributed [10]. Uber will benefit from fleets of autonomous vehicles; its drivers may not. Law and legal theory need further development to ensure that the vulnerable are equally able to pursue life’s crucial opportunities: to work, parent, attend school, and far more.

The history and present of the administrative state’s addiction to automation suggest a need to make fresh choices regarding the future. We are excited to be a part of the expanding academic, civil society, and industry community dedicated to preserving important legal, ethical, and dignitary safeguards while harnessing the affordances of new technology to promote human flourishing.

 

Endnotes

1. Danielle Keats Citron, “Big Data Should Be Regulated by Technological Due Process,” The New York Times, August 6, 2014, https://www.nytimes.com/roomfordebate/2014/08/06/is-big-data-spreading-inequality/big-data-should-be-regulated-by-technological-due-process.

2. Danielle Keats Citron, “Technological Due Process,” Washington University Law Review 85, no. 6 (2008), https://openscholarship.wustl.edu/law_lawreview/vol85/iss6/2/.

3. Ibid.; Danielle Keats Citron, “Open Code Governance,” University of Chicago Legal Forum (2008): 355, https://digitalcommons.law.umaryland.edu/fac_pubs/511/; Paul M. Schwartz, “Data Processing and Government Administration: The Failure of the American Legal Response to the Computer,” Hastings Law Journal 43 (1991): 1322, https://scholarship.law.berkeley.edu/cgi/viewcontent.cgi?article=1987&context=facpubs.

4. Jerry L. Mashaw, Bureaucratic Justice (New Haven: Yale University Press, 1983), cited in a related context by Paul M. Schwartz, “Data Processing and Government Administration.”

5. Colin Lecher, “What Happens When an Algorithm Cuts Your Health Care,” The Verge, March 21, 2018, https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy.

6. Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin’s Press, 2018).

7. Cathy O’Neil, Weapons of Math Destruction (New York: Broadway Books, 2016); Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, MA: Harvard University Press, 2015); Joshua Kroll et al., “Accountable Algorithms,” University of Pennsylvania Law Review 165, no. 3 (2017), https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3/.

8. A notable exception is Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (New York: NYU Press, 2017), which calls for the use of big data for pro-social and non-punitive ends.

9. Justice Cuéllar of the California Supreme Court makes a similar point about the promise of AI-aided translation. See Mariano-Florentino Cuéllar, “A Simpler World? On Pruning Risks and Harvesting Fruits in an Orchard of Whispering Algorithms,” UC Davis Law Review 51 (2017): 27. Of course, there are dangers in AI translation, especially in high-stakes contexts: it can reproduce bias, and there have been high-profile mistakes.

10. Kate Crawford and Ryan Calo, “There Is a Blind Spot in AI Research,” Nature 538, no. 7625 (October 20, 2016): 311–313, https://www.nature.com/news/there-is-a-blind-spot-in-ai-research-1.20805.
