Algorithmic technologies powered by AI are rapidly increasing in sophistication, and they promise tremendous benefits to society. Already, we have seen advanced decision-making tools address difficult medical challenges, route traffic with near-optimal efficiency, and curate media feeds to stunning degrees of personalization. The social and economic advantages of such technologies are clear for all to see.
But even as advanced algorithmic technologies yield significant leaps forward, a serious and deep-rooted set of concerns has emerged in parallel. In the past few years, discriminatory and harmful consequences have multiplied and spread at scale, like weeds taking root and threatening to pull us back down.
The warning signs have long been in place, too. Consider the Federal Housing Administration's use of algorithms as early as the 1930s, which silently but systematically subverted the financial outcomes and geographic mobility of marginalized, minority classes of the American population by downgrading a property's mortgage score if it was in an urban area near "inharmonious racial groups." Or take the case of a Boston-based app that so effectively crowdsourced resident-provided reports of potholes that the municipality partnered with the app-makers to dispatch repair jobs efficiently—only to find over time that pothole repairs in younger, richer neighborhoods were being prioritized over all others.
Perhaps the new set of harms wrought by novel algorithmic technologies is most vividly illustrated on the internet. Take one example: digital advertising campaigns that promote STEM-related work to men more frequently than to women—and not for the reasons you might think. In practice, the machine learning algorithms that price ad placements through automated auctions, invisible to the public, have determined that showing those ads to men is cheaper than reaching the comparatively more expensive female audience. Paid content aside, social media platforms are now relentlessly optimized to prioritize the content their algorithms predict will engage users longest, thus spotlighting nefarious content that nevertheless draws the human mind in—proliferating hateful conduct, misleading narratives, outright disinformation, and discriminatory content.
These and many other topics are at play in this collection of essays written by policy experts and academics, each of whom has contributed a big idea on the future of fair algorithms. Over the next few weeks, we'll publish new ideas regularly to provoke discussion. You can sign up here to be notified when a new idea is published.
What these experts ultimately conclude is that the societal tensions brought to the fore by algorithmic technologies do not mean we should cease developing AI. Instead, they call for an open, vibrant, and vital set of discussions about what is right—or, put differently, what the moral character of any algorithm should be. Only by engaging in this crucial conversation can we determine the contours of "ethical" AI.
This set of concerns is fast emerging as one of the most pivotal public policy issues of our time. Our purpose in presenting this curated anthology is to help the public grapple with these difficult considerations, and to support the work of policymakers in the United States and beyond as we earnestly evaluate right and wrong in the age of high computing.
This initiative, a project of the Shorenstein Center at the Harvard Kennedy School that began when Dipayan was a Fellow at New America, the Washington-based public policy think tank, is an effort to bring some of the most impactful ideas on the development of ethical algorithms to policymakers and the public.
- Dipayan Ghosh, Ph.D., Pozen Fellow & Nicco Mele, Director,
Shorenstein Center on Media, Politics, and Public Policy,
Harvard Kennedy School