Today, with millions of news articles published by countless sources, most readers access news online. News recommender systems (NRS) suggest articles likely to interest a reader, helping users find appropriate and relevant content while filtering out the rest, which in turn alleviates the information overload problem. The recommender extracts candidate articles from a large pool and filters them according to the reader’s interests, learning from users’ past behavior.
Recommendations that overemphasize user interests, however, can produce filter bubbles. A filter bubble occurs when recommendations based on signals such as search logs and access logs surface only the information a user wants to see, isolating the user from information that does not line up with their own ideas and sealing their beliefs and values inside a “bubble.” Pariser coined the term in 2011, and it has since become one of the most heavily discussed topics around news recommendation.
Although many approaches to news recommendation have been proposed, models based on deep learning have recently been shown to perform particularly well. Most current deep learning techniques rely on attention mechanisms that learn representations (vectors) of users and news from past click logs and predict click-through rates for unseen items. More recently, adopting pre-trained language models such as BERT has further improved user and content representations. Increasing diversity in news recommender systems has therefore become an active research topic in recent years, yet no proposal has addressed diversity of political opinion, particularly differences in political stance across issues.
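The attention-based pipeline described above can be illustrated with a minimal sketch: pool word embeddings into a news vector, pool clicked-news vectors into a user vector, and score the pair with a dot product. All dimensions, queries, and embeddings here are placeholder assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(vectors, query):
    """Dot-product attention: pool a set of vectors into one representation."""
    weights = softmax(vectors @ query)
    return weights @ vectors, weights

rng = np.random.default_rng(0)

# hypothetical 4-dimensional embeddings for the words of one candidate article
word_vecs = rng.normal(size=(6, 4))
news_vec, _ = attention_pool(word_vecs, np.ones(4))  # query would be learned in practice

# user representation: attention over embeddings of previously clicked articles
clicked_vecs = rng.normal(size=(5, 4))
user_vec, _ = attention_pool(clicked_vecs, np.ones(4))

# predicted click-through probability for the candidate article
ctr = 1.0 / (1.0 + np.exp(-(user_vec @ news_vec)))
```

In a real model the attention queries and embeddings are trained end-to-end on click logs; here they are fixed only to show the data flow.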
The main problem with news recommendation systems is that they can skew readers’ political views based on past activity. Biases toward particular political positions in recommendation results may deepen the divide between liberals and conservatives. To address this, a group of researchers published a paper on reducing political homogenization across topics in content-based news recommendation. The work proposes a novel strategy, focused on political ideology, to combat filter bubbles in news recommendation. Interest-based recommendation results are skewed for various reasons, including the emotional polarity and content of articles; this study specifically considers liberal or conservative bias and political stance.
The paper presents two attention-based deep learning models. The first adds a term to the objective function that penalizes words describing political ideology, whether liberal or conservative, which the authors collected independently, so that these words are less likely to affect the prediction outcome. The second strategy gives greater weight to topic-specific words. A technique combining both ideas was also tested.
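The two strategies can be sketched as follows: the first adds a penalty proportional to the attention mass placed on ideology-lexicon words, and the second boosts the raw attention scores of topic-specific words before normalization. The loss form, masks, and coefficients here are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bce(y_true, y_pred, eps=1e-9):
    """Binary cross-entropy for a single like/dislike label."""
    return -(y_true * np.log(y_pred + eps) + (1 - y_true) * np.log(1 - y_pred + eps))

def penalized_loss(y_true, y_pred, attn, ideology_mask, lam=0.5):
    # strategy 1: penalty grows with attention placed on ideology-laden words
    return bce(y_true, y_pred) + lam * (attn * ideology_mask).sum()

def topic_boosted_attention(scores, topic_mask, boost=2.0):
    # strategy 2: amplify raw attention scores of topic-specific words
    return softmax(scores + np.log(boost) * topic_mask)

attn = np.array([0.1, 0.4, 0.3, 0.2])    # attention over 4 words
ideology_mask = np.array([0, 1, 0, 0])   # word 2 appears in the ideology lexicon
loss = penalized_loss(1.0, 0.8, attn, ideology_mask, lam=0.5)

scores = np.array([1.0, 0.5, 2.0, 0.0])
topic_mask = np.array([0, 0, 1, 0])      # word 3 is topic-specific
boosted = topic_boosted_attention(scores, topic_mask)
```

Minimizing the penalized loss pushes the model to place less attention on ideology words; the boost does the opposite for topic words, and the combined variant would apply both at once.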
In the proposed approach, the team treats news recommendation as a simple binary rating of whether the reader likes an article or not. Readers are tagged into two categories: “prefers conservative articles” (category 1) and “prefers liberal articles” (category 2). The experiment used a dataset of 900,000 news articles from 41 different news websites obtained from Liu et al. Each article carries one of five political-stance labels, {-2, -1, 0, 1, 2}, on a scale from -2 (more liberal) to +2 (more conservative). This research uses 100,000 samples from the collection. Because those 100,000 news stories are tagged with political stances rather than topics, unsupervised clustering was used to extract the topics for this research.
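Topic extraction via unsupervised clustering can be sketched with plain k-means over article feature vectors (e.g., TF-IDF or embeddings). The toy features, cluster count, and initialization below are illustrative assumptions, not the study's actual clustering pipeline.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: assign each article vector to its nearest centroid."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # squared distance from every article to every centroid
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# two well-separated toy groups standing in for articles on two topics
articles = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(10, 3)),
    rng.normal(loc=5.0, scale=0.1, size=(10, 3)),
])
topic_labels = kmeans(articles, k=2)
```

Each resulting cluster label serves as a pseudo-topic, letting the method reason about political stance per topic even though the dataset lacks topic annotations.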
The team tested the proposed method on a dataset of users who held opposing political views on two issues and found that it outperformed both baselines, STN and STAN. Filter bubbles remain a major problem in the realm of recommender systems, and real-world recommender systems that take this range of political viewpoints into account are expected to be developed in the future.
Check out the Paper and Reference Article. All credit for this research goes to the researchers of this project.
Niharika is a technical consulting intern at Marktechpost. She is a third-year student, currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Kharagpur. She is a very enthusiastic individual with a strong interest in machine learning, data science, and artificial intelligence, and an avid reader of the latest developments in these fields.