Many learning tasks in Artificial Intelligence involve graph data, arising in domains that range from biology and chemistry to finance and education. As powerful learning tools for graph inputs, graph neural networks (GNNs) have demonstrated remarkable performance across a wide range of applications. Despite this success, unlocking the full potential of GNNs requires addressing their limitations in robustness and scalability. In this talk, I will present a fresh perspective on enhancing GNNs by optimizing the graph data rather than designing new models. Specifically, I will first present a model-agnostic framework that improves prediction performance by enhancing the quality of an imperfect input graph. I will then show how to significantly reduce the size of a graph dataset while preserving sufficient information for GNN training.
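To make the "optimize the data, not the model" idea concrete, below is a minimal, self-contained PyTorch sketch of one such strategy: treating the adjacency matrix as a learnable quantity and refining it jointly with a small GCN under sparsity and fidelity regularizers. This is an illustrative toy under assumed choices, not the specific framework presented in the talk; the class names, regularizer weights, and toy data are placeholders.

```python
# Illustrative sketch only (not the speaker's method): jointly refine a noisy
# adjacency matrix and train a simple two-layer GCN on a toy node-classification task.
import torch
import torch.nn.functional as F

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = torch.diag(A_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

class TwoLayerGCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, A_norm, X):
        H = F.relu(A_norm @ self.w1(X))
        return A_norm @ self.w2(H)

# Toy data: 6 nodes, 4 features, 2 classes, and a noisy observed adjacency matrix.
torch.manual_seed(0)
X = torch.randn(6, 4)
y = torch.tensor([0, 0, 0, 1, 1, 1])
A_obs = (torch.rand(6, 6) > 0.5).float()
A_obs = ((A_obs + A_obs.t()) > 0).float().fill_diagonal_(0)

# Treat the graph structure itself as a learnable (refined) quantity.
S = torch.nn.Parameter(A_obs.clone())
model = TwoLayerGCN(4, 8, 2)
opt = torch.optim.Adam(list(model.parameters()) + [S], lr=0.01)

for epoch in range(200):
    opt.zero_grad()
    A_ref = torch.sigmoid(S)          # keep edge weights in [0, 1]
    A_ref = (A_ref + A_ref.t()) / 2   # enforce symmetry
    logits = model(normalize_adj(A_ref), X)
    loss = F.cross_entropy(logits, y)
    loss = loss + 1e-3 * A_ref.abs().sum()             # sparsity regularizer
    loss = loss + 1e-3 * (A_ref - A_obs).pow(2).sum()  # stay close to the observed graph
    loss.backward()
    opt.step()
```

The design choice illustrated here is that the refined adjacency is optimized by the same task loss that trains the GNN, so graph cleaning and model training reinforce each other; the regularizer weights trade off how far the learned structure may drift from the observed graph.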
Date:
Location: SAGE 3510
Speaker: Wei Jin, Michigan State University