This project presents ideas to encourage a discussion about designing fairer algorithms.
Over the next few months, we’ll be publishing regular ideas to provoke discussion. Sign up here to be notified when a new idea is published. The About page discusses our motivations for starting this initiative.
- Dipayan Ghosh, Ph.D., Pozen Fellow & Nicco Mele, Director
Shorenstein Center on Media, Politics, and Public Policy
Harvard Kennedy School
It is often impossible to choose between competing algorithms without making ethical judgments. They implicate basic notions of fairness, they change the character of decision-making, and they have political implications for the future of news.
Question the effects of your machine learning model. Information is an intervention: it is not simply a matter of understanding the world, but also of using that understanding to shape the world.
Creating AI systems that can take culturally influenced reasoning into account is crucial for creating accurate and effective computational supports for analysts, policymakers, consumers, and citizens.
If the promise of artificial intelligence is to make systems smarter and more efficient, there may be no better candidate than the US criminal justice system. However, careful attention must be paid to the infrastructure at the heart of every artificial intelligence system: the data, the algorithms, the human beings who use them, and the mechanisms for accountability.
Algorithms, and algorithmic discrimination, are often presented as recent phenomena. But while big data and computational power have enabled more advanced algorithms and reduced the need for human calculation, simpler algorithms have long been used to allocate private and public goods—and not without prejudice.
Artificial intelligence in applications as diverse as sex bots and war machines is giving rise to equally diverse concerns: algorithmic bias, transparency, accountability, privacy, psychological impact, trust, and beyond. Of course, not all of these issues arise in every form of AI; few people, if any, care about privacy with military robots. But one root ethical issue that does apply to the entire technology category is the general ability to make decisions. This is the linchpin issue to be examined here.
The data revolution that is transforming every sector of science and industry has been slow to reach the local and municipal governments and NGOs that deliver vital human services. The public sector is bound by a mandate for responsibility and transparency to the individuals and organizations it affects. What does this kind of data transparency look like, and how can it be built into the systems we design?
Risk assessment algorithms are changing the ways judges decide on cases. But does AI help remove human biases from the criminal justice system, or can biases built into the algorithms reinforce historical disparities?
Computer-driven scoring and categorization engineer practical and prejudicial discriminations: they separate those who don’t count from those who do, and determine the prices specific individuals are offered relative to others.
The apparent bias arose because other advertisers valued female eyeballs more highly, something that would not have been evident from analyzing an algorithm that was simply intended to minimize costs.