Disentangled Multiplex Graph Representation Learning
- Official code: YujieMo/DMG on GitHub (Python)
Hi, I’m Yujie Mo (莫宇杰 in Chinese). I am currently a second-year Ph.D. student in Computer Science and Technology at the University of Electronic Science and Technology of China (UESTC), under the supervision of Prof. Xiaofeng Zhu. My research interests include graph representation learning and self-supervised/unsupervised learning. I received my bachelor’s degree in Computer Science and Technology from Northeastern University (Shenyang, China) in 2020, was admitted to UESTC for a master’s degree the same year, and transferred to the Ph.D. program in 2022. I am also currently a visiting Ph.D. student in the Learning and Vision (LV) Lab at the National University of Singapore, under the supervision of Prof. Xinchao Wang. Check my CV for more details.
Feel free to contact me about research or internships by email or WeChat (mujin20209).
- 2024.01 : 🎉🎉 One paper is accepted by ICLR 2024.
- 2023.04 : 🎉🎉 One paper is accepted by ICML 2023.
- 2023.04 : 🎉🎉 One paper is accepted by TKDE.
📝 Publications
* indicates equal contribution
Self-supervised Heterogeneous Graph Learning: a Homophily and Heterogeneity View ICLR 2024
Yujie Mo , Feiping Nie, Ping Hu, Heng Tao Shen, Zheng Zhang, Xinchao Wang, Xiaofeng Zhu
Disentangled Multiplex Graph Representation Learning ICML 2023
Yujie Mo , Yajie Lei, Jialie Shen, Xiaoshuang Shi, Heng Tao Shen, Xiaofeng Zhu
Multiplex Graph Representation Learning via Dual Correlation Reduction TKDE 2023
Yujie Mo , Yuhuan Chen, Yajie Lei, Liang Peng, Xiaoshuang Shi, Changan Yuan, Xiaofeng Zhu
Multiplex Graph Representation Learning via Common and Private Information Mining AAAI 2023
Yujie Mo *, Zongqian Wu*, Yuhuan Chen, Xiaoshuang Shi, Heng Tao Shen, Xiaofeng Zhu
Simple Self-supervised Multiplex Graph Representation Learning ACM MM 2022
Yujie Mo , Yuhuan Chen, Liang Peng, Xiaoshuang Shi, Xiaofeng Zhu
Simple unsupervised graph representation learning AAAI 2022
Yujie Mo *, Liang Peng*, Jie Xu, Xiaoshuang Shi, Xiaofeng Zhu
Self-Training based Few-Shot Node Classification by Knowledge Distillation. AAAI 2024
Zongqian Wu*, Yujie Mo *, Peng Zhou, Shangbo Yuan, Xiaofeng Zhu
GRLC: Graph representation learning with constraints TNNLS 2023
Liang Peng*, Yujie Mo *, Jie Xu, Jialie Shen, Xiaoshuang Shi, Xiaoxiao Li, Heng Tao Shen, Xiaofeng Zhu
🎖 Honors and Awards
- 2021.10 First-class Scholarship.
- 2021.12 AAAI 2022 Student Scholarship.
- 2022.04 “Academic Youth” Graduate Student Honor Award.
- 2022.12 AAAI 2023 Travel Scholarship.
- 2023.04 “Academic Newcomer” Graduate Student Honor Award.
- 2023.04 Outstanding Graduate Teaching Assistant Award.
- 2023.04 Outstanding Graduate Student Cadre.
- 2023.10 National Scholarship.
📖 Education
- 2022.09 - present, University of Electronic Science and Technology of China, Chengdu, China, Ph.D. student in Computer Science and Technology.
- 2020.09 - 2022.06 , University of Electronic Science and Technology of China, Chengdu, China, Master of Computer Technology, transferred to Ph.D.
- 2016.09 - 2020.06 , Northeastern University, Shenyang, China, Bachelor of Computer Science and Technology.
- Program Committee Member for ICML 2024, KDD 2024, ICLR 2024, ECCV 2024, AAAI 2023-2024, ACM MM 2023-2024, NeurIPS 2023, etc.
- Reviewer for TNNLS, TKDE, TIP, IPM, etc.
Disentangled Multiplex Graph Representation Learning
Yujie Mo · Yajie Lei · Jialie Shen · Xiaoshuang Shi · Heng Tao Shen · Xiaofeng Zhu · Exhibit Hall 1 #506
Unsupervised multiplex graph representation learning (UMGRL) has received increasing interest, but few works have simultaneously focused on extracting both common and private information. In this paper, we argue that effective and robust UMGRL requires extracting complete and clean common information, as well as private information with more complementarity and less noise. To achieve this, we first investigate disentangled representation learning for the multiplex graph to capture complete and clean common information, and design a contrastive constraint to preserve the complementarity and remove the noise in the private information. Moreover, we theoretically show that the common and private representations learned by our method are provably disentangled and contain more task-relevant and less task-irrelevant information, which benefits downstream tasks. Extensive experiments verify the superiority of the proposed method on different downstream tasks.
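The core idea of separating a shared "common" component from per-view "private" residuals can be illustrated with a toy sketch. This is not the paper's implementation — DMG learns these components with neural encoders and a contrastive constraint — but a minimal, self-contained illustration in which the common part is the cross-view mean and each private part is the orthogonal residual; all function names here are hypothetical.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scale(u, s):
    return [a * s for a in u]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def disentangle(view_embeddings):
    """Split each view embedding into a shared 'common' part and a 'private' residual.

    common  : cross-view mean (a crude stand-in for the shared signal)
    private : each view's residual after projecting out the common direction,
              so dot(common, private) == 0 by construction.
    """
    n = len(view_embeddings)
    dim = len(view_embeddings[0])
    common = [sum(e[i] for e in view_embeddings) / n for i in range(dim)]
    denom = dot(common, common) or 1.0
    privates = []
    for e in view_embeddings:
        proj = scale(common, dot(e, common) / denom)
        privates.append(sub(e, proj))
    return common, privates

# Two "views" of the same node, e.g. from two relation types of a multiplex graph.
common, privates = disentangle([[1.0, 2.0, 0.0], [3.0, 0.0, 2.0]])
for p in privates:
    assert abs(dot(common, p)) < 1e-9  # private parts are decorrelated from common
```

The orthogonality check mirrors, in miniature, the paper's goal of decorrelating the two representations; the actual method enforces this with learned objectives rather than a closed-form projection.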
Disentangled Contrastive Learning on Graphs
Part of Advances in Neural Information Processing Systems 34 (NeurIPS 2021)
Haoyang Li, Xin Wang, Ziwei Zhang, Zehuan Yuan, Hang Li, Wenwu Zhu
Recently, self-supervised learning for graph neural networks (GNNs) has attracted considerable attention because of its notable success in learning representations of graph-structured data. However, a real-world graph typically forms through the highly complex interaction of many latent factors. Existing self-supervised learning methods for GNNs are inherently holistic and neglect the entanglement of the latent factors, leaving the learned representations suboptimal for downstream tasks and difficult to interpret. Learning disentangled graph representations with self-supervised learning poses great challenges and remains largely ignored by the existing literature. In this paper, we introduce the Disentangled Graph Contrastive Learning (DGCL) method, which is able to learn disentangled graph-level representations with self-supervision. In particular, we first identify the latent factors of the input graph and derive its factorized representations. Each factorized representation describes a latent and disentangled aspect pertinent to a specific latent factor of the graph. Then we propose a novel factor-wise discrimination objective in a contrastive learning manner, which forces the factorized representations to independently reflect the expressive information of different latent factors. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of our method over several state-of-the-art baselines.
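To make the factor-wise discrimination idea concrete, here is a hedged toy sketch (hypothetical names, not DGCL's code): the embedding is split into K factor channels, and agreement between two augmented views of the same graph is scored per channel rather than with one holistic similarity.

```python
import math

def chunks(vec, k):
    """Split a flat embedding into k equal factor channels."""
    step = len(vec) // k
    return [vec[i * step:(i + 1) * step] for i in range(k)]

def cosine(u, v):
    du = math.sqrt(sum(a * a for a in u)) or 1.0
    dv = math.sqrt(sum(b * b for b in v)) or 1.0
    return sum(a * b for a, b in zip(u, v)) / (du * dv)

def factor_wise_agreement(z1, z2, k):
    """Per-factor similarity between two augmented views of the same graph."""
    return [cosine(c1, c2) for c1, c2 in zip(chunks(z1, k), chunks(z2, k))]

# Factor 0 agrees perfectly across the two views; factor 1 does not.
scores = factor_wise_agreement([1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0], k=2)
```

A holistic objective would collapse these per-factor scores into a single number; scoring each channel separately is what lets the contrastive loss push each factor to capture a distinct latent aspect.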
Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure
Part of Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track
Shaohua Fan, Xiao Wang, Yanhu Mo, Chuan Shi, Jian Tang
Most Graph Neural Networks (GNNs) predict the labels of unseen graphs by learning the correlation between input graphs and labels. However, through a graph classification investigation on training graphs with severe bias, we surprisingly discover that GNNs tend to explore spurious correlations to make decisions, even when the causal correlation always exists. This implies that existing GNNs trained on such biased datasets suffer from poor generalization capability. By analyzing this problem from a causal view, we find that disentangling and decorrelating the causal and bias latent variables in the biased graphs are both crucial for debiasing. Inspired by this, we propose a general disentangled GNN framework to learn the causal substructure and bias substructure, respectively. In particular, we design a parameterized edge-mask generator to explicitly split the input graph into causal and bias subgraphs. Two GNN modules, supervised by causal- and bias-aware loss functions respectively, are then trained to encode the causal and bias subgraphs into their corresponding representations. With the disentangled representations, we synthesize counterfactual unbiased training samples to further decorrelate the causal and bias variables. Moreover, to better benchmark the severe-bias problem, we construct three new graph datasets with controllable bias degrees that are easier to visualize and explain. Experimental results demonstrate that our approach achieves superior generalization performance over existing baselines. Furthermore, owing to the learned edge mask, the proposed model has appealing interpretability and transferability.
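The edge-mask split at the heart of this framework can be sketched in a few lines. In the paper the per-edge scores come from a trained, parameterized mask generator; in this hedged illustration the logits are simply given, and all names are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def split_graph(edges, edge_logits, threshold=0.5):
    """Route each edge to the causal or bias subgraph by its mask score.

    The paper learns edge_logits with a parameterized generator; here they
    are supplied directly, which is the hypothetical part of this sketch.
    """
    causal, bias = [], []
    for edge, logit in zip(edges, edge_logits):
        (causal if sigmoid(logit) >= threshold else bias).append(edge)
    return causal, bias

edges = [(0, 1), (1, 2), (2, 3)]
causal, bias = split_graph(edges, [2.0, -1.5, 0.3])
```

Downstream, each subgraph would be fed to its own GNN encoder with a causal- or bias-aware loss; the hard threshold here stands in for the soft, differentiable masking a trainable generator would use.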
Disentangled Graph Representation with Contrastive Learning for Rumor Detection
MPXGAT: An Attention-Based Deep Learning Model for Multiplex Graphs Embedding
28 Mar 2024 · Marco Bongiovanni, Luca Gallo, Roberto Grasso, Alfredo Pulvirenti
Graph representation learning has rapidly emerged as a pivotal field of study. Despite its growing popularity, the majority of research has been confined to embedding single-layer graphs, which fall short in representing complex systems with multifaceted relationships. To bridge this gap, we introduce MPXGAT, an innovative attention-based deep learning model tailored to multiplex graph embedding. Leveraging the robustness of Graph Attention Networks (GATs), MPXGAT captures the structure of multiplex networks by harnessing both intra-layer and inter-layer connections. This exploitation facilitates accurate link prediction within and across the network's multiple layers. Our comprehensive experimental evaluation, conducted on various benchmark datasets, confirms that MPXGAT consistently outperforms state-of-the-art competing algorithms.
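The intra-layer/inter-layer distinction MPXGAT exploits can be illustrated with a simplified attention update. This is a hedged sketch, not the model's implementation: it uses plain dot-product attention instead of GAT's learned attention coefficients, and the `mix` blend and all function names are hypothetical.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(target, neighbours):
    """Aggregate neighbour features, weighted by dot-product attention scores."""
    scores = softmax([sum(t * n for t, n in zip(target, nb)) for nb in neighbours])
    dim = len(target)
    return [sum(w * nb[i] for w, nb in zip(scores, neighbours)) for i in range(dim)]

def multiplex_update(target, intra, inter, mix=0.5):
    """Blend attention over intra-layer and inter-layer neighbours for one node."""
    h_intra = attend(target, intra)
    h_inter = attend(target, inter)
    return [mix * a + (1 - mix) * b for a, b in zip(h_intra, h_inter)]

h = multiplex_update([1.0, 0.0],
                     intra=[[1.0, 0.0], [1.0, 0.0]],  # neighbours in the same layer
                     inter=[[0.0, 1.0]])              # the node's replica in another layer
```

Keeping two separate attention passes, one per connection type, is what lets a multiplex model weight within-layer structure differently from cross-layer structure before combining them.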
Disentangled Partial Label Learning
- Wei-Xuan Bao School of Computer Science and Engineering, Southeast University, Nanjing, China Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China
- Yong Rui Lenovo Research, Lenovo Group Ltd., Beijing, China
- Min-Ling Zhang School of Computer Science and Engineering, Southeast University, Nanjing, China Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China
Computer Science > Information Retrieval
Title: Dual-Channel Multiplex Graph Neural Networks for Recommendation
Abstract: Efficient recommender systems play a crucial role in accurately capturing user and item attributes that mirror individual preferences. Some existing recommendation techniques have started to shift their focus towards modeling the various types of interaction relations between users and items in real-world scenarios, such as clicks, marking favorites, and purchases on online shopping platforms. Nevertheless, these approaches still grapple with two significant shortcomings: (1) insufficient modeling and exploitation of the impact of the various behavior patterns formed by multiplex relations between users and items on representation learning, and (2) ignoring the effect of the different relations within a behavior pattern on the target relation in recommender-system scenarios. In this study, we introduce a novel recommendation framework, the Dual-Channel Multiplex Graph Neural Network (DCMGNN), which addresses these challenges. It incorporates an explicit behavior-pattern representation learner to capture the behavior patterns composed of multiplex user-item interaction relations, and includes a relation-chain representation learning module and a relation-chain-aware encoder to discover the impact of various auxiliary relations on the target relation and the dependencies between different relations, and to mine the appropriate order of relations in a behavior pattern. Extensive experiments on three real-world datasets demonstrate that DCMGNN surpasses various state-of-the-art recommendation methods, outperforming the best baselines by 10.06% and 12.15% on average across all datasets in terms of R@10 and N@10, respectively.
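The relation-chain idea can be illustrated with a toy scoring function: interactions under auxiliary relations (e.g. click, favourite) contribute evidence for the target relation (purchase), with relation-dependent weights applied in chain order. This is a hedged sketch with hypothetical names and fixed weights, not DCMGNN's learned encoder.

```python
def relation_chain_score(interactions, chain, weights):
    """Weighted evidence for the target relation, accumulated along a relation chain.

    interactions : per-relation interaction counts for one user-item pair
    chain        : relation order, e.g. click -> favourite -> purchase
    weights      : importance of each relation (learned in the paper, fixed here)
    """
    return sum(weights[r] * interactions.get(r, 0) for r in chain)

interactions = {"click": 5, "favourite": 2, "purchase": 1}
chain = ["click", "favourite", "purchase"]
weights = {"click": 0.1, "favourite": 0.3, "purchase": 1.0}
score = relation_chain_score(interactions, chain, weights)
```

In the actual model, the per-relation contributions and their ordering are discovered by the relation-chain representation learner rather than hand-set, but the principle is the same: auxiliary relations feed the target relation with unequal, order-aware influence.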
@InProceedings{Mo_ICML_2023,
  title     = {Disentangled Multiplex Graph Representation Learning},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  author    = {Mo, Yujie and Lei, Yajie and Shen, Jialie and Shi, Xiaoshuang and Shen, Heng Tao and Zhu, Xiaofeng},
  year      = {2023}
}
With many social problems nowadays, rumor detection in social media has become increasingly important. Previous works proposed classical and deep learning methods to extract information from features or rumor propagation structures. However, these methods either require lots of labeled data or are disturbed by noise nodes easily. To address these challenges, we propose a novel method that ...
Though learning disentangled representation is expected to facilitate label disambiguation for partial-label (PL) examples, few existing works were dedicated to addressing this issue. In this paper, we make the first attempt towards disentangled PLL and propose a novel approach named TERIAL, which makes predictions according to derived ...