ProG

We propose a multi-task prompting approach for graph models, which enables the smooth integration of NLP’s prompting concept into graph tasks.

All in One: Multi-Task Prompting for Graph Neural Networks

Fork it on GitHub

YouTube Video

Bilibili Video

Abstract

Recently, “pre-training and fine-tuning” has become a standard workflow for many graph tasks, since it transfers general graph knowledge to applications that lack graph annotations. However, node-level, edge-level, and graph-level tasks are highly diverse, so the pre-training pretext is often incompatible with these multiple tasks. This gap can even cause “negative transfer” to a specific application, leading to poor results. Inspired by prompt learning in natural language processing (NLP), which has proved effective at leveraging prior knowledge across a wide range of NLP tasks, we study prompting for graphs with the goal of bridging the gap between pre-trained models and various graph tasks. In this paper, we propose a novel multi-task prompting method for graph models. Specifically, we first unify the format of graph prompts and language prompts through the prompt token, token structure, and inserting pattern. In this way, the prompting idea from NLP can be seamlessly introduced to the graph area. Then, to further narrow the gap between various graph tasks and state-of-the-art pre-training strategies, we study the task space of graph applications and reformulate downstream problems as graph-level tasks. Afterward, we introduce meta-learning to efficiently learn a better initialization for the multi-task graph prompt, so that our prompting framework is more reliable and general across tasks. Extensive experiments demonstrate the superiority of our method.
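
To make the prompt format concrete, below is a minimal PyTorch-style sketch of the idea described above: a small set of learnable prompt tokens whose features are inserted into the input graph before it is fed to a frozen pre-trained GNN. The class name PromptGraph, the parameter token_num, and the dot-product inserting pattern are illustrative assumptions for this sketch, not the actual ProG API.

python

import torch
import torch.nn as nn


class PromptGraph(nn.Module):
    """Sketch of a learnable graph prompt (names are hypothetical, not the ProG API)."""

    def __init__(self, token_num: int, feat_dim: int):
        super().__init__()
        # Prompt tokens: learnable vectors living in the node-feature space.
        self.tokens = nn.Parameter(torch.empty(token_num, feat_dim))
        nn.init.kaiming_uniform_(self.tokens)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Insert the prompt into a graph with node features x of shape (N, feat_dim).

        Inserting pattern used in this sketch: each node attends to the prompt
        tokens by dot product and adds the weighted token features to its own,
        so the frozen pre-trained GNN simply sees a "prompted" graph.
        """
        weights = torch.softmax(x @ self.tokens.t(), dim=-1)  # (N, token_num)
        return x + weights @ self.tokens                       # prompted node features

In use, only the prompt tokens would be optimized on the downstream (graph-level) task, while the pre-trained GNN weights stay frozen; the graph structure itself is passed through unchanged.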

Citation

bibtex

@inproceedings{sun2023all,
	title = {All in {One}: {Multi}-{Task} {Prompting} for {Graph} {Neural} {Networks}},
	booktitle = {Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)},
	author = {Sun, Xiangguo and Cheng, Hong and Li, Jia and Liu, Bo and Guan, Jihong},
	year = {2023},
	pages = {2120--2131},
	isbn = {9798400701030},
	publisher = {Association for Computing Machinery},
	address = {New York, NY, USA},
	location = {Long Beach, CA, USA},
	series = {KDD '23},
	url = {https://doi.org/10.1145/3580305.3599256},
	doi = {10.1145/3580305.3599256}
}

Contributions

Motivation

Intuitively, the above graph-level pre-training strategies share intrinsic similarities with the masked-language prediction task: aligning two graph views generated by node/edge/feature masking or other perturbations is very similar to predicting vacant “blanks” on graphs. This inspires us to ask: why not use a similarly formatted prompt for graphs to improve the generalization of graph neural networks? Instead of fine-tuning a pre-trained model with an adaptive task head, prompt learning reformulates the input data to fit the pretext. Many effective prompting methods were first proposed in the NLP area, including hand-crafted prompts (as in GPT-3), discrete prompts, and trainable prompts in continuous spaces. Despite the significant results achieved there, prompt-based methods have rarely been introduced to the graph domain. We find only a few works, such as GPPT, that try to design prompts for graphs. Unfortunately, most of them are very limited and far from sufficient to meet multi-task demands.

Conclusion

In this paper, we study the multi-task problem of graph prompts under few-shot settings. We propose a novel method that reformulates tasks at different levels into a unified one, and we further design an effective prompt graph with a meta-learning technique. We extensively evaluate our method, and the experiments demonstrate the effectiveness of our framework.