Cognitive biases, once regarded purely as flaws in human decision-making, are increasingly recognized for their potential positive role in learning and decision-making. In machine learning, however, and especially in search and classification systems, cognitive biases remain understudied. Most work in information retrieval concentrates on detecting biases and measuring their effect on search behavior, with only a handful of studies exploring how these biases influence model training or the ethical behavior of machines. As a result, harnessing cognitive biases to improve retrieval algorithms remains a largely unexplored area that offers both opportunities and challenges for researchers.
Recommender systems research has touched on some psychologically rooted human biases, such as primacy and recency effects in peer recommendation and risk aversion and decision biases in product recommendation. A systematic study of cognitive biases in recommendation, however, is still missing: there is no research tracing how these biases surface at the different stages of the recommendation process. This gap is surprising given that recommender systems research has long drawn on psychological theories, models, and empirical evidence about human decision making, and it represents a significant missed opportunity to leverage cognitive biases to improve recommendation algorithms and user experiences.
Researchers from Johannes Kepler University Linz and the Linz Institute of Technology (Austria) have proposed a comprehensive approach to examining cognitive biases within the recommendation ecosystem. The research investigates evidence of these biases at different stages of the recommendation process and from the perspective of different stakeholders. The researchers take first steps toward understanding the complex interplay between cognitive biases and recommender systems, showing that user and item models can be improved by evaluating and exploiting the positive effects of these biases, leading to better-performing recommendation algorithms and higher user satisfaction.
The study examines cognitive biases in recommender systems empirically. The feature-positive effect (FPE) is analyzed in job recommendation using a dataset of 272 job advertisements and 336 applicants across six categories. A trained recommender model predicts matches between candidates and job advertisements, yielding 13,607 true-positive and 1,625 false-negative predictions, which the analysis uses to understand how the FPE affects job recommendations (a sketch of such a probe follows below). Furthermore, the IKEA effect is analyzed through a survey on the Prolific platform with 100 American participants who use music streaming services; participants rated four statements on a 5-point Likert scale about their habits in creating, editing, and consuming music collections.
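As the results below describe, the FPE analysis hinges on removing adjectives from job descriptions and re-scoring candidate–job pairs. The snippet below is a minimal sketch of such a probe, not the authors' implementation: the `match_score` interface and the 0.5 decision threshold are hypothetical placeholders, and spaCy is used here only as one convenient way to tag adjectives.

```python
import spacy

# Small English pipeline for part-of-speech tagging
# (assumes it is installed: python -m spacy download en_core_web_sm)
nlp = spacy.load("en_core_web_sm")


def strip_adjectives(text: str) -> str:
    """Remove all adjectives (POS tag 'ADJ') from a job description."""
    doc = nlp(text)
    return " ".join(tok.text for tok in doc if tok.pos_ != "ADJ")


def probe_feature_positive_effect(model, candidate, job_text, threshold=0.5):
    """Compare the model's match decision on the original vs. adjective-free text.

    `model.match_score(candidate, text)` is a hypothetical scoring interface;
    the trained recommender used in the study is not publicly specified here.
    """
    original = model.match_score(candidate, job_text)
    stripped = model.match_score(candidate, strip_adjectives(job_text))
    return {
        "original_score": original,
        "stripped_score": stripped,
        # True if dropping descriptive language flips a match into a non-match
        "flipped_to_negative": original >= threshold and stripped < threshold,
    }
```

Running such a probe over all candidate–job pairs would show how often descriptive language alone decides whether a match is predicted.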
The FPE results show that removing adjectives from job descriptions increases false-negative predictions, highlighting the crucial role of descriptive language in the accuracy of job recommendations. Conversely, when unique adjectives from high-recall job ads are added, relevance scores improve for 52.0% of the false-negative samples and 12.9% of them become true positives. For the IKEA effect, 48 out of 88 participants reported consuming their own playlists more frequently than others' playlists, with an average difference of 0.65 (SD = 1.52) in consumption frequency. This preference for self-created content suggests the presence of the IKEA effect in music recommendation systems.
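The kind of aggregate figures reported above could be tallied as in the sketch below. The arrays, variable names, and the 0.5 threshold are illustrative placeholders under the assumption of a score-and-threshold recommender and Likert-coded survey answers; they are not the study's actual data or pipeline.

```python
import numpy as np


def summarize_fpe_repair(scores_before, scores_after, threshold=0.5):
    """Summarize how relevance scores of false-negative samples change after
    enriching job ads with adjectives (illustrative threshold and interface)."""
    before = np.asarray(scores_before, dtype=float)
    after = np.asarray(scores_after, dtype=float)
    improved = float(np.mean(after > before))  # share of samples whose score rose
    became_tp = float(np.mean((before < threshold) & (after >= threshold)))  # FN -> TP
    return {"improved_share": improved, "became_true_positive_share": became_tp}


def summarize_ikea_effect(own_freq, other_freq):
    """Per-participant difference in reported consumption frequency of
    self-created vs. other playlists (Likert-coded responses)."""
    diff = np.asarray(own_freq, dtype=float) - np.asarray(other_freq, dtype=float)
    return {
        "n_preferring_own": int(np.sum(diff > 0)),
        "mean_difference": float(diff.mean()),
        "sd_difference": float(diff.std(ddof=1)),
    }
```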
In summary, the researchers have introduced a detailed approach to examining cognitive biases within the recommendation ecosystem. The paper demonstrates the presence and impact of cognitive biases such as the feature-positive effect (FPE), the IKEA effect, and cultural homophily in recommender systems. These investigations lay the groundwork for further exploration in this promising field, and the study underlines how important it is for recommender systems researchers and practitioners to develop a deep understanding of cognitive biases and their potential effects throughout the recommendation process.
Take a look at the Paper. All credit for this research goes to the researchers of this project.
Sajjad Ansari is a final year student from IIT Kharagpur. As a technology enthusiast, he delves into practical applications of AI, focusing on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.