Publications

You can also find my articles on my Google Scholar profile.

2026

Projection-specific Routing of Odor Information in the Olfactory Cortex

S. Daste*, T. Pham*, M. Seppo, A. Andre, S. Srinivasan, J. Xiao, A. Sattin, C. Nardin, T. Fellin, K. Franks, E. Dyer, A. Fleischmann
Preprint, 2026
Paper
Abstract

Sensory processing in the mammalian cortex relies on extensive feedforward and feedback connections, yet how information is routed along these pathways remains poorly understood. Here, we examined the functional properties of feedback and feedforward neurons in the mouse olfactory (piriform) cortex. We selectively labeled neurons projecting to the olfactory bulb (OB, feedback) or medial prefrontal cortex (mPFC, feedforward) and recorded their activity during passive odor exposure and learning of an odor discrimination task. We found that odor identity and reward associations were encoded by OB-projecting ensembles early during odor exposure, whereas mPFC-projecting neurons encoded this information later, aligned with behavioral responses. Moreover, mPFC-projecting neurons maintained a stable representation of valence across days, while OB-projecting neurons exhibited pronounced plasticity. Together, these findings reveal that odor information is selectively routed through feedforward and feedback pathways and suggest that the functional properties of piriform neurons mirror the computational demands of their downstream targets.

2025

A Scalable Self-Supervised Method for Modeling Human Intracranial Recordings during Natural Behavior

S. Mahato*, J. Xiao*, A. Andre, G. Chau, W. Ma, I. Knight, D. Nguyen, L. Hu, B. Brunton, M. Beauchamp, B. Pesaran, S. Shuvaev, E. Dyer
Foundation Models for the Brain and Body (BrainBody) Workshop, Spotlight, Neural Information Processing Systems (NeurIPS), 2025
Paper
Abstract

Understanding how the brain supports natural behavior is an increasingly central goal in human neuroscience. Recordings from human neurosurgical patients with intracranial EEG electrodes offer direct access to widespread brain electrical activity during a variety of behaviors over extended times. Despite the progress in the field, utilizing these recordings at scale to identify the neural underpinnings of natural human behavior remains difficult due to variability in electrode placement, channel geometry, and behavioral diversity across participants and sessions. To address these challenges, we introduce a self-supervised framework for multi-participant intracranial neural data. We use a Perceiver-based architecture to reconstruct masked channels of neural activity from unmasked channels using learnable embeddings of channel identity and contextual information, capturing inter-channel dependencies without requiring labels. Fine-tuning our self-supervised model improves decoding performance on a panel of downstream tasks, highlighting the potential of self-supervised learning to enable general-purpose neural decoding and to support scalable integration of naturalistic human brain recordings.

Revealing Potential Biases in LLM-Based Recommender Systems in the Cold Start Setting

A. Andre*, G. Roy*, E. Dyer, K. Wang
Evaluating and Applying Recommender Systems with Large Language Models (EARL) Workshop, Oral, ACM Conference on Recommender Systems (RecSys), 2025
Paper
Abstract

Large Language Models (LLMs) are increasingly used for recommendation tasks due to their general-purpose capabilities. While LLMs perform well in rich-context settings, their behavior in cold-start scenarios, where only limited signals such as age, gender, or language are available, raises fairness concerns because they may rely on societal biases encoded during pretraining. We introduce a benchmark specifically designed to evaluate fairness in zero-context recommendation. Our modular pipeline supports configurable recommendation domains and sensitive attributes, enabling systematic and flexible audits of any open-source LLM. Through evaluations of state-of-the-art models (Gemma 3 and Llama 3.2), we uncover consistent biases across recommendation domains (music, movies, and colleges) including gendered and cultural stereotypes. We also reveal a non-linear relationship between model size and fairness, highlighting the need for nuanced analysis.

Neural Encoding and Decoding at Scale

Y. Zhang*, Y. Wang*, M. Azabou, A. Andre, Z. Wang, H. Lyu, The International Brain Laboratory, E. Dyer, L. Paninski, C. Hurwitz
International Conference on Machine Learning (ICML), Spotlight, 2025
Paper
Abstract

Recent work has demonstrated that large-scale, multi-animal models are powerful tools for characterizing the relationship between neural activity and behavior. Current large-scale approaches, however, focus exclusively on either predicting neural activity from behavior (encoding) or predicting behavior from neural activity (decoding), limiting their ability to capture the bidirectional relationship between neural activity and behavior. To bridge this gap, we introduce a multimodal, multi-task model that enables simultaneous Neural Encoding and Decoding at Scale (NEDS). Central to our approach is a novel multi-task masking strategy, which alternates between neural, behavioral, within-modality, and cross-modality masking. We pretrain our method on the International Brain Laboratory (IBL) repeated site dataset, which includes recordings from 83 animals performing the same visual decision-making task. In comparison to other large-scale models, we demonstrate that NEDS achieves state-of-the-art performance for both encoding and decoding when pretrained on multi-animal data and then fine-tuned on new animals. Surprisingly, NEDS's learned embeddings exhibit emergent properties: even without explicit training, they are highly predictive of the brain regions in each recording. Altogether, our approach is a step towards a foundation model of the brain that enables seamless translation between neural activity and behavior.

* co-first authors

© 2025 Alexandre ANDRE