The capacity to infer user preferences from past behavior is crucial for effective personalized recommendations. The fact that many products never receive star ratings makes the task considerably harder: past actions are typically reduced to a binary signal indicating whether or not a user has interacted with a given item. Deducing user preferences from such implicit feedback requires additional assumptions to be layered on top of the binary data.
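For example, a hypothetical interaction log might be encoded as a binary user-item matrix along these lines (toy IDs for illustration only):

```python
import numpy as np

# Hypothetical interaction log: (user_id, item_id) pairs such as clicks or purchases.
interactions = [(0, 2), (0, 3), (1, 0), (2, 1), (2, 3)]

n_users, n_items = 3, 4
R = np.zeros((n_users, n_items), dtype=np.int8)
for u, i in interactions:
    R[u, i] = 1  # 1 = interacted; 0 = no observed interaction (not necessarily dislike)

print(R)
```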
It is tempting to assume that users enjoy the content they have engaged with and dislike the content they have ignored. The second half of that assumption, however, rarely holds in practice: a user may fail to interact with an item simply because they are unaware it exists. It is therefore more plausible to assume that users are merely indifferent to the items they have not interacted with.
Prior studies have instead assumed a tendency to favor items one has already interacted with over items one has not. This idea forms the basis of Bayesian Personalized Ranking (BPR), a widely used technique for personalized recommendation. In BPR, the data is transformed into a three-dimensional binary tensor D, where the first dimension indexes users and the remaining two index pairs of items, so that each entry records whether a user is assumed to prefer one item over the other.
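As a rough illustration, the pairwise training signal behind BPR can be enumerated from a binary interaction matrix as follows (function and variable names are illustrative, not from the BPR paper):

```python
import numpy as np

def bpr_triples(R):
    """Enumerate (user, preferred_item, other_item) triples from a binary matrix R.

    Under the BPR assumption, an interacted item i is preferred over a
    non-interacted item j, i.e. the tensor entry D[u, i, j] is 1.
    """
    triples = []
    for u in range(R.shape[0]):
        pos = np.flatnonzero(R[u] == 1)
        neg = np.flatnonzero(R[u] == 0)
        for i in pos:
            for j in neg:
                triples.append((u, int(i), int(j)))
    return triples

# Example with a 2-user, 3-item binary interaction matrix.
R = np.array([[1, 0, 1],
              [0, 1, 0]])
print(bpr_triples(R))  # e.g. (0, 0, 1) means user 0 prefers item 0 over item 1
```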
A new Apple study develops a variant of the popular Bayesian Personalized Ranking (BPR) model that does not rely on transitivity, and proposes an alternative tensor decomposition to generalize it: Sliced Anti-symmetric Decomposition (SAD), a novel collaborative filtering model based on implicit feedback. Using a novel three-way tensor view of user-item interactions, SAD adds one more latent vector to each item, in contrast to conventional methods that estimate only latent representations of users (user vectors) and items (item vectors). This new vector generalizes the preferences derived from ordinary dot products to generic inner products, producing interactions between items when relative preferences are evaluated. When the vector collapses to 1, SAD reduces to a state-of-the-art (SOTA) collaborative filtering model; in this work, the researchers instead allow its value to be estimated from data. Letting the new item vector's values differ from 1 has far-reaching consequences: cycles can now appear in pairwise comparisons, which is interpreted as evidence that users' mental models are not linear.
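To make the idea concrete, here is a minimal sketch of an anti-symmetric preference score with one extra vector per item. The names (H, T, sad_score) and the exact parameterization are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def sad_score(x_u, H, T, i, j):
    """Illustrative anti-symmetric score for "user u prefers item i over item j".

    x_u : latent vector of the user
    H   : conventional item vectors (one row per item)
    T   : the additional per-item vectors introduced by SAD (illustrative name)

    When every row of T is the all-ones vector, the score reduces to the familiar
    difference of dot products x_u . H[i] - x_u . H[j] of standard matrix factorization.
    """
    return float(np.sum(x_u * (H[i] * T[j] - H[j] * T[i])))

# Tiny example with 2 items and a 3-dimensional latent space.
rng = np.random.default_rng(0)
x_u, H, T = rng.normal(size=3), rng.normal(size=(2, 3)), rng.normal(size=(2, 3))
print(sad_score(x_u, H, T, 0, 1), sad_score(x_u, H, T, 1, 0))  # anti-symmetric: s(i,j) == -s(j,i)
```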
The team presents a fast group coordinate descent algorithm for SAD parameter estimation, in which simple stochastic gradient descent (SGD) updates produce accurate parameter estimates quickly. Using a simulation study, they first demonstrate the efficacy of SGD and the expressiveness of SAD. They then pit SAD against seven alternative SOTA recommendation models on three publicly available datasets. The work shows that, by incorporating previously ignored data and relationships between entities, the new model delivers more reliable and accurate results.
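Building on the scoring sketch above, a hedged sketch of what one SGD update on a single preference triple could look like under a logistic loss; the objective, the absence of regularization, and the variable names are assumptions made for illustration rather than the authors' exact algorithm:

```python
import numpy as np

def sgd_step(x_u, h_i, h_j, t_i, t_j, label, lr=0.01):
    """One illustrative SGD update on a single (u, i, j) preference under a logistic loss.

    `label` is 1 if the user is assumed to prefer item i over item j, 0 otherwise.
    """
    s = np.sum(x_u * (h_i * t_j - h_j * t_i))   # anti-symmetric preference score
    p = 1.0 / (1.0 + np.exp(-s))                # probability that i is preferred over j
    g = p - label                               # gradient of the logistic loss w.r.t. s
    # Compute all gradients before applying any update.
    gx = g * (h_i * t_j - h_j * t_i)
    gh_i, gh_j = g * x_u * t_j, -g * x_u * t_i
    gt_i, gt_j = -g * x_u * h_j, g * x_u * h_i
    x_u -= lr * gx
    h_i -= lr * gh_i
    h_j -= lr * gh_j
    t_i -= lr * gt_i
    t_j -= lr * gt_j
    return x_u, h_i, h_j, t_i, t_j
```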
In this work, the researchers focus on collaborative filtering with implicit feedback, but the applications of SAD are not limited to that data type. Datasets with explicit ratings, for instance, contain partial orders that can be used directly during model fitting, rather than only being checked for consistency post hoc, as is current practice.
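For instance, a user's explicit star ratings induce a partial order over items, which could be extracted along these lines (a toy sketch, not the paper's procedure):

```python
def rating_partial_order(user_ratings):
    """Derive pairwise preferences (a partial order) from one user's explicit ratings.

    user_ratings: dict mapping item_id -> star rating. Ties impose no order.
    """
    items = list(user_ratings)
    return [(a, b) for a in items for b in items
            if user_ratings[a] > user_ratings[b]]

# A 5-star item is preferred over a 3-star item; the two 3-star items stay unordered.
print(rating_partial_order({"movie_a": 5, "movie_b": 3, "movie_c": 3}))
```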