The Need to Regulate AI Implementation in Public Assistance Programs

EMMA COLEMAN
Public Interest Technology Fellow, New America

MYACAH SAMPSON
Civic Engagement Researcher, Engage Miami

The implementation of artificial intelligence in the public sector feels inevitable given its gradual introduction into nearly every private-sector industry over the past few decades. This, paired with the sheer number of people who access federal and state services every day, makes incorporating a tool known for increasing efficiency, reducing error, and maximizing output seem appropriate. But beneath the obvious benefits of AI integration, we must remember the potential for misuse and harm that accompanies automating agencies that serve some of the country’s most vulnerable citizens.

The potential of AI to transform public assistance programs in the United States, such as Medicare and Medicaid, public housing, Temporary Assistance for Needy Families (TANF), and the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), is monumental. Streamlined data collection and analysis from assistance program participants, along with proper data sharing across agencies, could free valuable time, for individual caseworkers and entire agencies alike, that would otherwise be spent on tedious data management. This, in turn, could mean more time for caseworkers to spend with clients, better client awareness of the public assistance programs for which they are eligible, and less enrollment hassle.

This promise is complicated, however, by the lack of an established track record in using AI for public social good. Take, for example, the case of Indiana’s automated and privatized welfare eligibility system. In 2006, the state signed a 10-year contract with IBM and call center company Affiliated Computer Services (ACS) to automate its public assistance eligibility processes. For those applying to means-tested public assistance programs like Medicaid or the Supplemental Nutrition Assistance Program (SNAP), this meant that rather than having their eligibility verified by caseworkers, they were approved or denied assistance by an algorithm and case management system developed by IBM.

To meet contract demands for faster eligibility decisions, IBM’s data management system and call center workers denied thousands of recipients assistance for “failure to cooperate” [1]. The data management system frequently lost applicant records and documents that clients had submitted on time, yet its algorithms could attribute a missing document only to applicant error. As a result, thousands of Hoosiers lost assistance through no fault of their own, including a woman denied Medicaid for failing to answer a call from ACS while she was hospitalized for heart failure [2]. Indiana canceled the contract three years later and sought more than $170 million in damages from IBM, in a case eventually decided in the state’s favor by the Indiana Supreme Court [3].
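The core design flaw is easy to state: when an eligibility check has only one explanation for a missing document, every system error becomes the applicant’s fault. The following Python sketch is purely illustrative, with invented document names and a hypothetical decision rule rather than anything from IBM’s actual system, but it captures the failure mode.

```python
# Illustrative sketch of the failure mode described above. The document
# names and the decision rule are hypothetical, not IBM's actual system.

REQUIRED_DOCS = {"proof_of_income", "proof_of_residency", "identification"}

def eligibility_decision(docs_on_file: set) -> str:
    """Deny any case with missing documents as a 'failure to cooperate'.

    The flaw: this rule cannot distinguish an applicant who never sent a
    document from one whose paperwork the system itself lost, so every
    gap is blamed on the applicant.
    """
    missing = REQUIRED_DOCS - docs_on_file
    if missing:
        return "DENIED: failure to cooperate (missing: " + ", ".join(sorted(missing)) + ")"
    return "ELIGIBLE: forward for benefit calculation"

# A document lost in processing looks identical to one never submitted:
print(eligibility_decision({"proof_of_income", "identification"}))
```

A more defensible design would track document provenance, distinguishing “never received” from “received but misfiled,” and route ambiguous cases to a human caseworker rather than to an automatic denial.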

Another well-intentioned use of AI gone awry took place in Los Angeles in 2013. United Way of Greater Los Angeles and the Los Angeles Chamber of Commerce launched a digital registry to streamline the difficult decisions associated with determining who among the roughly 60,000 unhoused people in the region would receive housing assistance [4]. Unhoused Angelenos answered a survey with questions about their mental health, experiences with domestic violence, sexual history, and drug use, which scored their vulnerability to hospitalization or death on a scale of 1 to 17. A second algorithm then attempted to match the most vulnerable people surveyed with the housing and services that were, in theory, best suited to their needs.

But nearly five years later, 21,500 unhoused participants had received no assistance whatsoever [5]. Though the digital system was touted as a more efficient and fair way to distribute housing resources in the city, it functioned instead as a cost-benefit analysis, funneling the most vulnerable into medical services and the least vulnerable into rapid-rehousing programs. Those in the middle, relatively healthy people who had been living unhoused for too long to qualify for rapid-rehousing initiatives, were left out in the cold.
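The triage logic at fault is simple to sketch. The bands and cut-off scores below are invented for illustration, not drawn from LA’s actual tool, but they show how a band-based matcher can leave the middle of the distribution with nothing.

```python
# Illustrative sketch of band-based triage. The cut-off scores are
# hypothetical and do not reflect Los Angeles' actual coordinated entry tool.

def match_resources(vulnerability_score: int) -> str:
    """Route a person to resources by vulnerability score (scale of 1 to 17)."""
    if vulnerability_score >= 12:   # most vulnerable: costly, intensive care
        return "permanent supportive housing and medical services"
    if vulnerability_score <= 4:    # least vulnerable: cheap, quick placements
        return "rapid-rehousing program"
    # Everyone in between matches no resource at all.
    return "no match"

# Scores 5 through 11 fall through both safety nets:
for score in range(1, 18):
    print(score, "->", match_resources(score))
```

Framed this way, the system is a cost-benefit optimizer rather than a needs-based allocator: it serves the cases that are cheapest to resolve and the cases too risky to ignore, and nothing in its objective forces it to serve anyone in between.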

So while contracting out administrative services might appeal to policymakers as a way to cut costs and improve the efficiency of public assistance, poorly designed data management systems, algorithms blind to individual circumstances, and untrained staff carry the risk of disrupting the lives of large numbers of constituents. AI can succeed in the public sector, and specifically in public assistance, only through thoughtful design and proper regulation.

First, it is important to note that AI systems tasked with achieving a beneficial outcome such as efficiency or “fairness” will take the most efficient route to get there, and may develop harmful methods along the way. When the state of Indiana attempted to give constituents faster notification of their eligibility (a beneficial outcome), the unintended consequence was the removal of thousands of applicants from the rolls. Similarly, LA’s attempt to make housing assistance decisions fairer left many unhoused people deemed too expensive for housing assistance but not vulnerable enough for emergency services. Clearly, AI systems tasked with public assistance decisions must be meticulously designed, and herein lies a new element of policymaking.

Part of that design process can and should include the voices of those who use the systems. When AI systems have unintended consequences, the negative effects fall disproportionately on vulnerable populations, which can further entrench poverty. Therefore, low-income citizens who frequently access public assistance platforms should be intimately involved in designing and implementing those programs. Their input and recommendations, when reviewed by policymakers, engineers, and designers, can improve programs both during design and while they are in operation.

Several current projects around the country show that this kind of method is workable. Propel, an app and web platform for SNAP recipients, routinely surveys users about their experience with SNAP and provides the results to policymakers [6]. The city of Boston established a Housing Innovation Lab and went out into communities to ask residents what they would like to see from improved affordable housing policies [7]. Public and private initiatives like these prove not only that user-informed methods are possible, but that they can make programs better suited to individual and community needs.

A commitment to this system of input and review from the outset can help ensure that quality AI for the public good is an achievable milestone, and a necessary one. Only algorithms and data systems consciously designed around the concerns and needs of real citizen users can positively transform the public assistance programs of the future.

Endnotes

1. Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin’s Press, 2018).

2. Ibid.

3. Rick Callahan, “Appeals Court Affirms Ruling That IBM Owes Indiana $78M,” Associated Press, September 28, 2018, https://www.apnews.com/36ba0562a02142e5adfe39518e2e0f85.

4. Mark Horvath, “Coordinated Entry Helping Los Angeles House Homeless People,” HuffPost, November 18, 2018, https://www.huffingtonpost.com/mark-horvath/coordinated-entry-helping_b_4297895.html.

5. Virginia Eubanks, Automating Inequality.

6. Propel, https://www.joinpropel.com/.

7. “Housing Innovation Lab,” New Urban Mechanics Department, City of Boston, July 26, 2018, https://www.boston.gov/departments/new-urban-mechanics/housing-innovation-lab.
