Today, algorithms make high-stakes decisions that significantly affect people's lives. Consequently, there is growing concern about how these decisions affect people's welfare, and whether they treat individuals and communities fairly.
As an assistant professor at the University of Toronto, Nisarg Shah works on addressing these issues of fairness and welfare in algorithmic decision-making. His research has made fundamental contributions to computational fair division and voting theory.
On the topic of fairness, Nisarg has worked primarily on the fair allocation of joint resources among multiple agents. He co-invented Maximum Nash Welfare, a state-of-the-art method that yields provably fair algorithmic solutions to problems such as inheritance division and divorce settlement, and that is actively used on a not-for-profit website he co-developed.
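Maximum Nash Welfare selects the allocation that maximizes the product of the agents' utilities, balancing fairness and efficiency. As an illustration only (not his implementation), here is a minimal brute-force sketch for indivisible goods, assuming additive utilities; the function name and inputs are hypothetical:

```python
from itertools import product

def max_nash_welfare(utilities):
    """Brute-force Maximum Nash Welfare for indivisible goods.

    utilities[i][g] is agent i's value for good g (additive utilities).
    Returns the allocation (a tuple assigning each good to an agent)
    that maximizes the product of the agents' total utilities.
    """
    n = len(utilities)       # number of agents
    m = len(utilities[0])    # number of goods
    best_alloc, best_welfare = None, -1.0
    # Enumerate every way of assigning each good to one agent.
    for alloc in product(range(n), repeat=m):
        bundle_value = [0.0] * n
        for good, agent in enumerate(alloc):
            bundle_value[agent] += utilities[agent][good]
        welfare = 1.0
        for value in bundle_value:
            welfare *= value
        if welfare > best_welfare:
            best_alloc, best_welfare = alloc, welfare
    return best_alloc, best_welfare

# Two agents, three goods: the product objective favors giving each
# agent the goods it values most relative to the other agent.
alloc, welfare = max_nash_welfare([[3, 1, 2], [1, 4, 1]])
# alloc == (0, 1, 0): agent 0 gets goods 0 and 2 (utility 5),
# agent 1 gets good 1 (utility 4), product welfare 20.
```

The exponential enumeration is only for exposition; practical systems solve this optimization with far more sophisticated techniques.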
Nisarg also co-discovered the provably optimal method for aggregating ranked ballots, resolving a 16-year-old open problem and paving the way for increased adoption of ranked-choice voting. His work has also shed light on complex democratic challenges such as participatory budgeting and political redistricting.
Over the past few years, Nisarg has turned his attention to applying the insights from this research to broader AI systems, including machine learning systems: understanding why they exhibit bias and unfairness, how to measure and mitigate such risks, and how to make them more transparent, trustworthy, and explainable.
Nisarg has co-discovered a principled framework for understanding bias and fairness in AI systems, which not only subsumes existing fairness definitions as special cases but also enables the design of novel fairness definitions suited to the application at hand.
Meanwhile, he has also invested significantly in public outreach on bias and fairness in AI systems, educating and training interested parties to view AI systems holistically.