Despite the rising popularity of message-passing neural networks (MPNNs), their ability to fit complex functions over graphs is limited, as node representations become increasingly similar with depth, a phenomenon known as over-smoothing. Most approaches to mitigating over-smoothing extend common message-passing schemes, e.g., the graph convolutional network, with residual connections, gating mechanisms, normalization, or regularization techniques. In contrast, our work proposes to operate MPNNs on multiple computational graphs. We show that operating on a graph with no ergodic components, i.e., a directed acyclic graph (DAG), prevents over-smoothing. Each DAG amplifies a different signal in the data, so combining several DAGs amplifies multiple signals simultaneously and prevents representational rank collapse. Based on these insights, we propose DA-MPNNs, a general framework that splits any given graph into three computational graphs based on a strict partial order of the nodes. Comprehensive experiments confirm the benefits of DA-MPNNs, further improving state-of-the-art MPNNs.
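To make the splitting idea concrete, here is a minimal NumPy sketch that partitions an edge set according to a node ranking and runs one round of mean-aggregation message passing on each part. The forward and backward edge sets are DAGs by construction, since each edge strictly increases or decreases the rank. Note that the specific ranking (BFS distance from a root node), the mean aggregation, and the summation used to combine the three results are illustrative assumptions for this sketch, not the exact construction or learned combination used in the paper.

```python
import numpy as np

def split_edges(edges, rank):
    """Split directed edges into three computational graphs via a node ranking.

    `rank` induces a strict partial order: u < v iff rank[u] < rank[v].
    Forward and backward edge sets are acyclic by construction; edges
    between equally ranked (incomparable) nodes form the third graph.
    NOTE: a simplifying assumption; the paper's split may be defined differently.
    """
    forward  = [(u, v) for u, v in edges if rank[u] < rank[v]]  # DAG
    backward = [(u, v) for u, v in edges if rank[u] > rank[v]]  # DAG
    lateral  = [(u, v) for u, v in edges if rank[u] == rank[v]]
    return forward, backward, lateral

def mean_aggregate(n, edges, x):
    """One round of mean-aggregation message passing on a directed edge set."""
    out = np.zeros_like(x)
    deg = np.zeros(n)
    for u, v in edges:
        out[v] += x[u]   # message from u to v
        deg[v] += 1
    mask = deg > 0
    out[mask] /= deg[mask, None]  # average incoming messages
    return out

# Toy usage: a 4-cycle with a chord, both edge directions included.
# Nodes are ranked by BFS distance from node 0 (a hypothetical choice
# of strict partial order; nodes 1 and 3 are incomparable).
n = 4
undirected = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
edges = undirected + [(v, u) for u, v in undirected]
rank = np.array([0, 1, 2, 1])  # BFS distances from node 0
x = np.random.default_rng(0).normal(size=(n, 8))

fwd, bwd, lat = split_edges(edges, rank)
# Each computational graph is processed separately; summing the three
# outputs stands in for the framework's learned combination.
h = mean_aggregate(n, fwd, x) + mean_aggregate(n, bwd, x) + mean_aggregate(n, lat, x)
```

Because no edge within the forward (or backward) set can return to a lower-ranked (higher-ranked) node, neither set contains a cycle, which matches the abstract's observation that message passing on graphs without ergodic components avoids over-smoothing.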

Citation:

Roth, A., Bause, F., Kriege, N. M., & Liebig, T. (2024). Message-passing on directed acyclic graphs prevents over-smoothing. In Proceedings of the 21st International Workshop on Mining and Learning with Graphs (MLG@ECML-PKDD 2024). https://mlg-europe.github.io/2024/papers/35/Submission/DA_MPNNs_MLG2024.pdf