[Please Note: This Ingram Olkin Forum session has already occurred. Go to the News Story for this event to read about what happened.]
Algorithms used for policies, evaluations, and discoveries that involve or affect people have become commonplace. Typically constructed by statistical and machine learning (AI) methods, they may harbor biases affecting individuals or subgroups in society, raising questions of fairness and implications for justice. For example, COMPAS, an algorithm used extensively to assess risk of recidivism and to inform sentencing in criminal cases, has been faulted for racial bias. And in State v. Loomis (2016), the Wisconsin Supreme Court rejected a defendant's claim that his sentencing was unfair even though the proprietary nature of COMPAS denied him (Loomis) the ability to examine or challenge how it produced its risk score.
Defining fairness, determining whether biases are present, understanding how they might arise, and establishing whether they can be detected or ameliorated are of direct concern. The speakers will discuss algorithmic fairness broadly, as well as how these issues arise throughout the AI lifecycle in various application domains.
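To give a concrete sense of what "detecting bias" can mean in practice, the short sketch below computes two widely used group-fairness measures for a binary classifier's predictions. It is illustrative only, not tied to any panelist's work or to COMPAS; the arrays y_pred and group are made-up stand-ins for model predictions and a protected-group indicator.

```python
# Illustrative sketch of two common group-fairness checks for a binary classifier.
# Not any panelist's method; data below is invented for demonstration.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between groups 1 and 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-prediction rates; values far from 1 suggest disparity."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

# Toy predictions and group labels (hypothetical).
y_pred = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0])
group  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

print(statistical_parity_difference(y_pred, group))  # 0.8 - 0.2 = 0.6
print(disparate_impact_ratio(y_pred, group))         # 0.8 / 0.2 = 4.0
```

Measures like these only flag disparities in outcomes; whether a disparity constitutes unfairness, and how to remedy it, are among the questions the panel will take up.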
Agenda
The Forum will be a panel discussion with three prominent researchers who have written extensively on this subject. They are:
Kush Varshney
Distinguished Research Staff Member and Manager, IBM Research AI
Thomas J. Watson Research Center, Yorktown Heights, NY
Kristian Lum
Assistant Professor of Computer and Information Science
University of Pennsylvania
Alexandra Chouldechova
Estella Loomis McCandless Assistant Professor of Statistics and Public Policy
Heinz College, Carnegie Mellon University
Moderator
Claire Kelling, Penn State University
Format
Each panelist will take 15 minutes to identify key issues. This will be followed by a 45–60 minute discussion among the three, along with audience questions screened by the moderator.
About the Speakers
Alexandra Chouldechova received her Ph.D. in Statistics from Stanford University. Her research investigates algorithmic fairness and accountability in data-driven decision-making systems, with a focus on criminal justice and human services. She is a member of the executive committee for the ACM Conference on Fairness, Accountability, and Transparency (FAccT).
Kristian Lum is a Research Assistant Professor in the Computer and Information Science (CIS) Department at the University of Pennsylvania. Prior to coming to Penn, she was Lead Statistician at the Human Rights Data Analysis Group. She is widely known for her work on algorithmic fairness and predictive policing. Dr. Lum has consulted for a number of city governments on policy issues and risk assessment, and she is a key organizer of the ACM FAccT (formerly FAT*) conferences.
Kush R. Varshney co-directs the IBM Science for Social Good initiative and leads the machine learning group in the Foundations of Trustworthy AI department. His research addresses considerations of machine learning beyond predictive accuracy, including fairness, explainability, robustness, transparency, safety, and causality, with applications to sustainable development.
Event Type
- NISS Hosted
- NISS Sponsored