This project presents ideas to encourage a discussion about designing fairer algorithms.
Over the next few months, we’ll be publishing regular ideas to provoke discussion. Sign up here to be notified when a new idea is published. The about page discusses our motivations for starting this initiative.
- Dipayan Ghosh, Ph.D., Pozen Fellow & Nicco Mele, Director
Shorenstein Center on Media, Politics, and Public Policy
Harvard Kennedy School
Learning algorithms do precisely what they are designed to do: find patterns in the data they are given. But the problem is that algorithms based on such data may well serve to introduce or perpetuate a variety of discriminatory biases, and thereby maintain the cycle of injustice.
Informed by trillions of consumer data points, today’s AI algorithms can discriminate against, or in favor of, thousands of highly specific consumer categories in myriad ways. In online retail, individualized customer profiles can determine whether discounts and incentives are offered to shoppers.
There are many opportunities for increasing user autonomy with software. When a user’s considered goals conflict with those of the platform they are interacting with, the interaction model often leaves room for the user to reshape that interaction using software of their own.
What happens in games can have very real repercussions for the world outside them, particularly as modern machine learning methods give us the ability to infer so much about the person behind the keyboard or joystick. When we play games, we may not be aware that we are revealing information about ourselves. We are letting the game spy on us through the magnifying glass of machine learning.
The design process for AI-enabled public assistance systems should include the voices of those who use them. When AI systems have unintended consequences, the harms fall disproportionately on vulnerable populations, which can further entrench poverty within public assistance systems.
The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. The history and present of the administrative state’s addiction to automation suggest a need to make fresh choices regarding the future.
Where researchers study the effects of social interventions and focus on statistically significant comparisons, their published results will, on average, overestimate effect sizes. And this happens even with honest, experienced, well-intentioned researchers using clean, randomized designs, as long as they are following standard practice and reporting statistically significant results.
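This selection effect is easy to demonstrate with a small simulation. The sketch below (an illustration, not any specific study's data; the effect size and standard error are made-up values) draws many noisy estimates around a small true effect, then averages only the "statistically significant" ones:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # small but real underlying effect (assumed value)
SE = 0.5            # standard error of each study's estimate (assumed value)
N_STUDIES = 10_000
Z_CRIT = 1.96       # two-sided 5% significance threshold

# Each honest study reports the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Publication filter: keep only results where |estimate / SE| > 1.96.
significant = [e for e in estimates if abs(e / SE) > Z_CRIT]

mean_all = statistics.mean(estimates)
mean_sig = statistics.mean(significant)

print(f"True effect:               {TRUE_EFFECT}")
print(f"Mean of all estimates:     {mean_all:.3f}")
print(f"Mean of significant only:  {mean_sig:.3f}")
```

The full set of estimates averages out to roughly the true effect, but the significant subset, which is all a reader of the published literature sees, averages several times larger, because with a noisy design only overestimates clear the significance bar.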
It is often impossible to choose between competing algorithms without making ethical judgments. They implicate basic notions of fairness, they change the character of decision-making, and they have political implications for the future of news.
Question the effects of your machine learning model. Information is active: it is not simply a matter of understanding the world, but also of using that understanding to shape the world.
Creating AI systems that can take culturally influenced reasoning into account is crucial for creating accurate and effective computational supports for analysts, policymakers, consumers, and citizens.