This project presents ideas to encourage a discussion about designing fairer algorithms.
Over the next few months, we’ll be publishing regular ideas to provoke discussion. Sign up here to be notified when a new idea is published. The About page discusses our motivations for starting this initiative.
- Dipayan Ghosh, Ph.D., Pozen Fellow & Nicco Mele, Director
Shorenstein Center on Media, Politics, and Public Policy
Harvard Kennedy School
What happens in games can have very real repercussions for the world outside them, particularly as modern machine learning methods give us the ability to infer so much about the person behind the keyboard or joystick. When we play games, we may not be aware that we are revealing information about ourselves. We are letting the game spy on us through the magnifying glass of machine learning.
The design process for AI-enabled public assistance systems should include the voices of the people who use them. When AI systems have unintended consequences, the negative effects fall disproportionately on vulnerable populations, which can further entrench poverty in public assistance systems.
The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. The history and present of the administrative state’s addiction to automation suggest a need to make fresh choices regarding the future.
When researchers study the effects of social interventions and report only statistically significant comparisons, their published results will, on average, overestimate effect sizes. This happens even with honest, experienced, well-intentioned researchers using clean, randomized designs, so long as they follow standard practice and publish only statistically significant results.
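This inflation can be demonstrated with a short simulation. The sketch below (a minimal illustration; the true effect size, sample size, and number of studies are all hypothetical, not drawn from any real study) repeatedly simulates an unbiased two-arm experiment, keeps only the estimates that cross the conventional 5% significance threshold, and compares their average to the true effect:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2    # hypothetical true effect (standardized units)
N_PER_GROUP = 50     # hypothetical sample size per arm
N_STUDIES = 5000     # number of simulated replications
Z_CRIT = 1.96        # two-sided 5% significance threshold

# Standard error of a difference in means, assuming unit variance per arm.
se = (2 / N_PER_GROUP) ** 0.5

significant_estimates = []
for _ in range(N_STUDIES):
    # Each simulated study yields an unbiased estimate of the true effect...
    estimate = random.gauss(TRUE_EFFECT, se)
    # ...but only "statistically significant" estimates get reported.
    if abs(estimate / se) > Z_CRIT:
        significant_estimates.append(estimate)

print("true effect:             ", TRUE_EFFECT)
print("mean published estimate: ",
      round(statistics.mean(significant_estimates), 3))
```

Because the study is underpowered, only unusually large estimates clear the significance bar, so the average published estimate lands well above the true effect even though each individual experiment is unbiased.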
It is often impossible to choose between competing algorithms without making ethical judgments. They implicate basic notions of fairness, they change the character of decision-making, and they have political implications for the future of news.
Question the effects of your machine learning model. Information is intervention—it is not simply a matter of understanding the world, but also of using that understanding to shape the world.
Creating AI systems that can take culturally influenced reasoning into account is crucial for creating accurate and effective computational supports for analysts, policymakers, consumers, and citizens.
If the promise of artificial intelligence is to make systems smarter and more efficient, there may be no better candidate than the US criminal justice system. However, careful attention must be paid to the infrastructure at the heart of every artificial intelligence system: the data and the algorithms, the human beings who use them, and the mechanisms of accountability.
Algorithms, and algorithmic discrimination, are often presented as recent phenomena. But while big data and computational power have enabled more advanced algorithms and reduced the need for human calculation, simpler algorithms have long been used to allocate private and public goods—and not without prejudice.
Artificial intelligence in diverse applications—from sex bots to war machines—is giving rise to equally diverse concerns: algorithmic bias, transparency, accountability, privacy, psychological impact, trust, and beyond. Of course, not all of these issues arise in every form of AI; for instance, few people, if any, care about privacy with military robots. But one root ethical issue that does apply to the entire technology category is the general ability to make decisions. This is the linchpin issue to be examined here.