Neural Encoding and Decoding at Scale
Published in ICML, 2025
Recommended citation: Zhang, Y., Wang, Y., Azabou, M., Andre, A., Wang, Z., Lyu, H., The International Brain Laboratory, Dyer, E., Paninski, L., & Hurwitz, C. (2025). Neural Encoding and Decoding at Scale. *arXiv:2504.08201*.

Download Paper: https://arxiv.org/abs/2504.08201

Revealing Potential Biases in LLM-Based Recommender Systems in the Cold Start Setting

Published in arXiv (workshop paper), 2025

Large Language Models (LLMs) are increasingly being used to power recommender systems, especially in cold start scenarios where traditional methods struggle with limited data. However, little is known about the potential biases these systems introduce when recommendations are generated directly from pretrained LLMs. In this work, we systematically analyze biases in LLM-based recommender systems across multiple cold start settings. We show that despite strong performance on relevance, these systems exhibit systematic demographic and content biases, which can propagate unfair outcomes. Our study highlights both risks and opportunities, providing insights into how LLM-based recommendation methods should be evaluated and deployed responsibly.

Recommended citation: Andre, A., Roy, G., Dyer, E., & Wang, K. (2025). Revealing Potential Biases in LLM-Based Recommender Systems in the Cold Start Setting. *arXiv:2508.20401*.

Download Paper: https://arxiv.org/abs/2508.20401