Graph Prompting for Graph Learning Models: Recent Advances and Future Directions



Abstract

Graph learning models have demonstrated great prowess in learning expressive representations from large-scale graph data in a wide variety of real-world scenarios. As a prevalent strategy for training powerful graph learning models, the "pre-training, adaptation" scheme first pre-trains graph learning models on unlabeled graph data in a self-supervised manner and then adapts them to specific downstream tasks. During the adaptation phase, graph prompting has emerged as a promising approach that learns trainable prompts while keeping the pre-trained graph learning models unchanged. This tutorial will cover recent advances in graph prompting, including:

  • Representative graph pre-training methods that serve as the foundation of graph prompting
  • Mainstream graph prompting techniques, elaborating on how they design learnable prompts
  • Real-world applications of graph prompting across different domains
  • Open challenges in existing studies and promising future directions in this field
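
To make the "pre-training, adaptation" scheme described in the abstract concrete, below is a minimal sketch of graph prompting in PyTorch: a toy GCN stands in for a pre-trained graph learning model, its weights are frozen, and only a learnable prompt vector added to the node features is optimized for the downstream task. The ToyGCN model, the pretrained_gnn.pt checkpoint, and all dimensions are hypothetical placeholders for illustration, not a specific method from the tutorial.

    import torch
    import torch.nn as nn

    class ToyGCN(nn.Module):
        """Two-layer GCN over a dense normalized adjacency matrix (toy stand-in)."""
        def __init__(self, in_dim, hid_dim, out_dim):
            super().__init__()
            self.w1 = nn.Linear(in_dim, hid_dim)
            self.w2 = nn.Linear(hid_dim, out_dim)

        def forward(self, adj, x):
            h = torch.relu(self.w1(adj @ x))  # propagate along edges, then transform
            return self.w2(adj @ h)

    # Backbone assumed to be pre-trained with self-supervision; checkpoint is hypothetical.
    backbone = ToyGCN(in_dim=16, hid_dim=32, out_dim=4)
    # backbone.load_state_dict(torch.load("pretrained_gnn.pt"))
    for p in backbone.parameters():
        p.requires_grad = False  # keep the pre-trained model unchanged

    # Graph prompt: one learnable vector added to every node's input features.
    prompt = nn.Parameter(torch.zeros(16))
    optimizer = torch.optim.Adam([prompt], lr=1e-2)  # only the prompt is trained

    # Toy downstream task: 5 nodes with self-loop adjacency and random labels.
    adj = torch.eye(5)
    x = torch.randn(5, 16)
    y = torch.randint(0, 4, (5,))

    for step in range(100):
        logits = backbone(adj, x + prompt)  # prompted features, frozen backbone
        loss = nn.functional.cross_entropy(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Many graph prompting methods vary exactly where and how the prompt is injected (input features, hidden representations, or inserted prompt nodes); this sketch shows only the simplest feature-level variant.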

Presenters


  • Xingbo Fu, University of Virginia
  • Zehong Wang, University of Notre Dame
  • TBD, Institution

Organizers

  • Xingbo Fu, PhD student, University of Virginia
  • Zehong Wang, PhD student, University of Notre Dame
  • Zihan Chen, PhD student, University of Virginia
  • Jiazheng Li, PhD student, University of Connecticut
  • Yaochen Zhu, PhD student, University of Virginia
  • Zhenyu Lei, PhD student, University of Virginia
  • Cong Shen, Associate Professor, University of Virginia
  • Yanfang Ye, Galassi Family Collegiate Professor, University of Notre Dame
  • Chuxu Zhang, Associate Professor, University of Connecticut
  • Jundong Li, Assistant Professor, University of Virginia