Speaker: Yushun Dong

Date: Nov 1, 2:15 – 3:05 pm

Abstract: Graph learning algorithms have been increasingly deployed in a wide range of real-world applications, such as epidemic analysis, healthcare, and financial analysis. Nevertheless, societal concern has been rising over the algorithmic bias these algorithms may exhibit. In high-stakes applications of graph learning, such as healthcare and criminal justice, decisions rely heavily on the algorithms' predictions, and life-changing decisions may be made for the individuals involved. The potential algorithmic bias can therefore lead to serious consequences, e.g., marginalizing underrepresented demographic subgroups and harming the interests of disadvantaged communities. As such, there is an urgent need to develop responsible graph machine learning algorithms that facilitate fairness-aware predictions. Handling this task properly, however, is non-trivial, and a series of challenges remains to be solved.

In this talk, I will present my work addressing fundamental challenges in model explanation and bias mitigation for fair graph learning. First, I will focus on group fairness and explore how each node in the graph influences the bias level during the optimization of graph learning models. Second, I will turn to individual fairness and discuss how to improve the fairness level of common graph learning algorithms. Finally, I will conclude with future research directions for developing responsible graph machine learning algorithms and deploying them for social good in real-world applications.
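To make the two fairness notions in the abstract concrete, below is a minimal, self-contained Python sketch of how they are commonly quantified for node predictions: a group-fairness measure (statistical parity difference between two demographic subgroups) and a simple individual-fairness proxy (prediction consistency across connected, i.e., similar, nodes). The function names and the toy graph are illustrative assumptions, not the speaker's method or the talk's actual evaluation.

    import numpy as np

    def statistical_parity_difference(y_pred, sensitive):
        # Group fairness: gap in positive-prediction rates between the two
        # subgroups defined by a binary sensitive attribute (0 vs. 1).
        y_pred = np.asarray(y_pred)
        sensitive = np.asarray(sensitive)
        rate_0 = y_pred[sensitive == 0].mean()
        rate_1 = y_pred[sensitive == 1].mean()
        return abs(rate_0 - rate_1)

    def consistency(y_pred, adjacency):
        # Individual-fairness proxy: fraction of neighboring node pairs
        # (treated as "similar individuals") receiving the same prediction.
        y_pred = np.asarray(y_pred)
        rows, cols = np.nonzero(np.asarray(adjacency))
        return (y_pred[rows] == y_pred[cols]).mean()

    # Toy example: 6 nodes on a path graph, binary predictions and attribute.
    y_hat = [1, 0, 1, 1, 0, 0]
    s     = [0, 0, 0, 1, 1, 1]
    A = np.array([[0, 1, 0, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [0, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 0],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 0, 1, 0]])
    print(statistical_parity_difference(y_hat, s))  # 0 would indicate parity across groups
    print(consistency(y_hat, A))                    # 1 would indicate fully consistent neighbors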

Biographical Sketch: Dr. Yushun Dong is an Assistant Professor in the Department of Computer Science at Florida State University. He received his Ph.D. in Electrical Engineering from the University of Virginia in 2024. His research interest lies in developing responsible graph learning algorithms to facilitate inclusive decision-making, and his work spans multiple areas such as deep learning explainability, algorithmic bias mitigation, safety, and applications of responsible learning algorithms in domains including healthcare and public policy. His work has been published in high-impact venues including SIGKDD, WWW, and AAAI. He is also the first author of the open-source Python library PyGDebias, which helps practitioners mitigate bias in commonly used graph learning algorithms. He is the recipient of multiple prestigious awards, including the Louis T. Rader Graduate Research Award, an Endowed Fellowship, and the Best Poster (Runner-Up) award at the Doctoral Forum of SDM 2022.

Location and Zoom link: 307 Love, or https://fsu.zoom.us/j/715375121