Recommender systems have gained prominence in various applications, with deep neural network-based algorithms displaying impressive capabilities. Large language models (LLMs) have recently demonstrated their proficiency in multiple tasks, prompting researchers to explore their potential in recommender systems. However, two main challenges hinder the adoption of LLMs: high computational requirements and neglect of collaborative signals. Recent studies have focused on semantic alignment methods to transfer knowledge from LLMs to collaborative models. However, a significant semantic gap remains due to the diverse nature of interaction data in collaborative models compared to the natural language used in LLMs. Attempts to bridge this gap through contrastive learning have shown limitations, potentially introducing noise and degrading recommendation performance.
Graph neural networks (GNNs) have gained prominence in recommender systems, particularly for collaborative filtering. Methods such as LightGCN, NGCF, and GCCF use GNNs to model user-item interactions, but face challenges from noisy implicit feedback. To mitigate this, self-supervised learning techniques such as contrastive learning have been employed, with approaches such as SGL, LightGCL, and NCL showing improved robustness and performance. More recently, LLMs have sparked interest in the recommendation community, with researchers exploring ways to integrate their powerful representation capabilities. Studies such as RLMRec, ControlRec, and CTRL use contrastive learning to align collaborative filtering embeddings with LLM semantic representations.
Researchers from the National University of Defense Technology (Changsha), Baidu Inc. (Beijing), and the Key Laboratory of Anhui Province at the University of Science and Technology of China presented DaRec (Disentangled alignment framework for the Recommendation model and LLM), a unique plug-and-play framework that addresses limitations in integrating LLMs with recommender systems. Motivated by theoretical findings, it aligns semantic knowledge through disentangled representations rather than exact alignment. The framework consists of three key components: (1) disentangling representations into shared and specific components to reduce noise, (2) employing uniformity and orthogonal losses to maintain representation informativeness, and (3) implementing a structural alignment strategy at the local and global levels for effective semantic knowledge transfer.
DaRec is an innovative framework for aligning semantic knowledge between LLMs and collaborative models in recommender systems. This approach is motivated by theoretical findings suggesting that exact alignment of representations may be suboptimal. DaRec consists of three main components:
- Representation Disentangling: The framework separates the representations of the collaborative model and the LLM into shared and specific components, reducing the negative impact of model-specific information that can introduce noise during alignment.
- Uniformity and Orthogonal Constraints: DaRec employs a uniformity objective and an orthogonal loss to keep the representations informative and to ensure that the shared and specific components carry unique, complementary information (a minimal sketch of these two losses follows this list).
- Structure Alignment Strategy: The framework implements a two-tier alignment approach:
- Global Structure Alignment: Aligns the overall structure of the shared representations.
- Local Structure Alignment: Uses clustering to identify preference centers and adaptively aligns them (see the clustering sketch after the next paragraph).
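To make the first two components concrete, here is a minimal PyTorch-style sketch of how embeddings might be split into shared and specific parts and regularized with orthogonal and uniformity losses. The names (SharedSpecificEncoder, orthogonal_loss, uniformity_loss), the MLP heads, and the loss forms are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSpecificEncoder(nn.Module):
    """Splits an input embedding (from the collaborative model or the LLM)
    into a shared component and a model-specific component via two MLP heads."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.shared_head = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.specific_head = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, z: torch.Tensor):
        return self.shared_head(z), self.specific_head(z)

def orthogonal_loss(shared: torch.Tensor, specific: torch.Tensor) -> torch.Tensor:
    # Push the shared and specific parts toward orthogonality so they carry
    # unique, complementary information.
    s = F.normalize(shared, dim=-1)
    p = F.normalize(specific, dim=-1)
    return (s * p).sum(dim=-1).pow(2).mean()

def uniformity_loss(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    # Standard uniformity objective: spread representations over the unit
    # hypersphere so the disentangled parts remain informative and do not collapse.
    z = F.normalize(z, dim=-1)
    sq_dists = torch.pdist(z).pow(2)   # pairwise squared distances within the batch
    return sq_dists.mul(-t).exp().mean().log()

# Usage sketch with hypothetical collaborative-model embeddings:
encoder_cf = SharedSpecificEncoder(dim=64)
z_cf = torch.randn(1024, 64)                      # placeholder user embeddings
shared_cf, specific_cf = encoder_cf(z_cf)
reg = orthogonal_loss(shared_cf, specific_cf) + uniformity_loss(shared_cf) + uniformity_loss(specific_cf)
```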
DaRec aims to overcome the limitations of previous methods by providing a more flexible and effective alignment strategy, potentially improving the performance of LLM-based recommender systems.
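The local structure alignment can be sketched in a similar hedged way: cluster the shared representations from each side into K preference centers and pull matched centers together. The K-means step and the greedy center matching below are simplifying assumptions for illustration; the paper's adaptive alignment may differ.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def preference_centers(shared_z: torch.Tensor, k: int = 8) -> torch.Tensor:
    # Cluster shared representations into k preference centers (plain K-means here;
    # in practice centers could be recomputed periodically or assigned softly).
    km = KMeans(n_clusters=k, n_init=10).fit(shared_z.detach().cpu().numpy())
    return torch.as_tensor(km.cluster_centers_, dtype=shared_z.dtype)

def local_alignment_loss(centers_cf: torch.Tensor, centers_llm: torch.Tensor) -> torch.Tensor:
    # Greedily match each collaborative-model center to its most similar
    # LLM-side center and maximize their cosine similarity.
    c1 = F.normalize(centers_cf, dim=-1)
    c2 = F.normalize(centers_llm, dim=-1)
    sim = c1 @ c2.t()                    # (K, K) cosine-similarity matrix
    nearest = sim.argmax(dim=1)          # adaptive (greedy) matching
    return (1.0 - sim[torch.arange(c1.size(0)), nearest]).mean()
```

The greedy argmax matching could be replaced by a one-to-one assignment or soft weighting; the cluster number K is a tunable hyperparameter (the analysis below finds values between 4 and 8 work best), so k=8 here is only a placeholder.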
DaRec outperformed both traditional collaborative filtering methods and LLM-enhanced recommendation approaches on three datasets (Amazon-book, Yelp, Steam) across multiple metrics (Recall@K, NDCG@K). For example, on the Yelp dataset, DaRec outperformed the second-best method (AutoCF) by 3.85%, 1.57%, 3.15%, and 2.07% in R@5, R@10, N@5, and N@10, respectively.
Hyperparameter analysis revealed optimal performance with a cluster number K in the range (4, 8), a trade-off parameter λ in the range (0.1, 1.0), and a sample size N̂ of 4096. Extreme values of these parameters degraded performance.
The t-SNE visualization demonstrated that DaRec successfully captured the interest clusters underlying users' preferences.
Overall, DaRec showed superior performance over existing methods, demonstrated robustness across a range of hyperparameter values, and effectively captured the underlying structure of users' interests.
This research presents DaRec, a unique plug-and-play framework for aligning collaborative models and LLMs in recommender systems. Based on a theoretical analysis showing that zero-gap alignment may not be optimal, DaRec disentangles representations into shared and specific components. It implements a dual-level structure alignment strategy at the global and local levels. The authors provide theoretical proof that their method produces representations with more relevant and less irrelevant information for recommendation tasks. Extensive experiments on benchmark datasets demonstrate DaRec’s superior performance over existing methods, representing a significant advance in integrating LLMs with collaborative filtering models.
Take a look at the Paper. All credit for this research goes to the researchers of this project. Also, don't forget to follow us on Twitter and join our Telegram Channel and LinkedIn Group. If you like our work, you will love our newsletter.
Don't forget to join our 49k+ ML SubReddit.
Find upcoming AI webinars here.
Asjad is a consultant intern at Marktechpost. He is pursuing a Bachelor's degree in Mechanical Engineering from the Indian Institute of Technology, Kharagpur. Asjad is a Machine Learning and Deep Learning enthusiast who is always researching applications of Machine Learning in the healthcare domain.