Graph Prompting for Graph Learning Models: Recent Advances and Future Directions

Time: Monday, August 4, 1-4pm
Location: Toronto Convention Centre, Toronto, Canada
Survey paper
Abstract
Graph learning models have demonstrated great prowess in learning expressive representations from large-scale graph data in a wide variety of real-world scenarios.
As a prevalent strategy for training powerful graph learning models, the "pre-training, adaptation" scheme first pre-trains graph learning models on unlabeled graph data in a self-supervised manner and then adapts them to specific downstream tasks.
During the adaptation phase, graph prompting emerges as a promising approach that learns trainable prompts while keeping the pre-trained graph learning models unchanged.
This tutorial will cover recent advancements in graph prompting, including:
- Representative graph pre-training methods that serve as the foundation step of graph prompting
- Mainstream techniques in graph prompting, elaborating on how they design learnable prompts
- Real-world applications of graph prompting across different domains
- Open challenges in existing studies and promising future directions in this field
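To make the "pre-training, adaptation" scheme concrete, the sketch below illustrates the core idea of graph prompting: a learnable prompt vector is added to node features and optimized for a downstream task while the pre-trained graph encoder stays frozen. All names, shapes, and the toy one-layer encoder are illustrative assumptions, not a method from the tutorial.

```python
# Toy illustration of graph prompting (assumed setup, not from the tutorial):
# a frozen one-layer graph encoder H = A_hat @ (X + p) @ W, where only the
# prompt vector p is trained for the downstream task.
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(4, 2))      # "pre-trained" encoder weights (kept frozen)
A_hat = np.eye(3)                # normalized adjacency (self-loops only, for brevity)
X = rng.normal(size=(3, 4))      # node features
y = rng.normal(size=(3, 2))      # toy downstream regression targets

prompt = np.zeros(4)             # learnable prompt added to every node's features

def forward(p):
    # Prompted features pass through the frozen encoder.
    return A_hat @ (X + p) @ W

def loss(p):
    return ((forward(p) - y) ** 2).mean()

# Adaptation phase: gradient descent on the prompt only; W is never updated.
lr = 0.05
for _ in range(200):
    err = forward(prompt) - y
    grad = 2 * (A_hat.T @ err @ W.T).mean(axis=0)  # dL/dp (up to a constant factor)
    prompt -= lr * grad
```

After training, `loss(prompt)` is lower than the loss with a zero prompt, showing that the downstream task can be adapted to without touching the pre-trained parameters.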
Organizers
Xingbo Fu, PhD student, University of Virginia
Zehong Wang, PhD student, University of Notre Dame
Zihan Chen, PhD student, University of Virginia
Jiazheng Li, PhD student, University of Connecticut
Yaochen Zhu, PhD student, University of Virginia
Zhenyu Lei, PhD student, University of Virginia
Cong Shen, Associate Professor, University of Virginia
Yanfang Ye, Galassi Family Collegiate Professor, University of Notre Dame
Chuxu Zhang, Associate Professor, University of Connecticut
Jundong Li, Associate Professor, University of Virginia