# Agent4Rec: LLM-Driven User Simulation for Recommender Systems
Agent4Rec is an open-source research simulator that models interactive user behavior for recommender systems using 1,000 LLM-driven generative agents. Instead of replaying static click logs, it simulates page-by-page interactions — watching, rating, evaluating, exiting, and answering interviews — allowing researchers to study longitudinal dynamics such as filter bubbles, exposure effects, and data augmentation.
Primary resources: [GitHub repository](https://github.com/LehengTHU/Agent4Rec), [SIGIR 2024 paper (arXiv)](https://arxiv.org/abs/2310.10108), [ACM Digital Library](https://dl.acm.org/doi/10.1145/3626772.3657844)

## What It Is
- Type: **open-source simulator** for recommender systems using **LLM-based generative agents**
- Origin: based on the SIGIR 2024 paper *On Generative Agents in Recommendation*
- Scale: **1,000 agents** initialized from MovieLens-1M and extended with profiles, memory (including emotions/reflection), and an action policy
- Domains: **MovieLens-1M**, **Amazon-Book**, **Steam**
- Status: research artifact (not a commercial product or hosted service)

## Why It Matters
- Study interactive behavior instead of static clicks to better understand **exposure → consumption → feedback loops**
- Safely test ranking policies and interventions offline
- Generate **synthetic yet faithful behavioral data** to augment sparse datasets
- Investigate **filter bubbles**, **popularity bias**, and **causal relationships** among item quality, exposure, and engagement

## How It Works
- Agents are initialized from real datasets (e.g., MovieLens-1M) and driven by **LLM prompts** for memory and decisions
- A **collaborative filtering baseline** provides recommendations; agents respond in a structured loop across pages
- Interaction logs include the actions **watch**, **rate**, **evaluate**, **exit**, and **interview**
- Designed for controlled experimentation without live users
- Learn more in the [GitHub repo](https://github.com/LehengTHU/Agent4Rec) and the [SIGIR 2024 paper](https://arxiv.org/abs/2310.10108)

## Core Capabilities
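The page-by-page loop can be sketched as follows. This is a minimal illustration, not the Agent4Rec API: all names (`SimAgent`, `recommend_page`, `run_session`) are hypothetical, the action set is simplified, and a rule-based taste check stands in for the LLM-prompted decision step.

```python
# Illustrative sketch of a page-by-page simulation session.
# The real Agent4Rec drives each decision with LLM prompts over agent memory.
import random
from dataclasses import dataclass, field

@dataclass
class SimAgent:
    """Stand-in for an LLM-driven agent with a taste profile and memory."""
    user_id: int
    liked_genres: set
    memory: list = field(default_factory=list)

    def act(self, item):
        """Decide on one recommended item and record the action in memory."""
        if item["genre"] in self.liked_genres:
            action = ("watch", item["id"], random.randint(3, 5))  # watch + rate
        else:
            action = ("skip", item["id"], None)
        self.memory.append(action)
        return action

def recommend_page(catalog, page, page_size=5):
    """Stand-in for the collaborative-filtering baseline: one page of items."""
    start = page * page_size
    return catalog[start:start + page_size]

def run_session(agent, catalog, max_pages=3, patience=3):
    """Simulate one session; the agent exits after too many consecutive skips."""
    log, skips = [], 0
    for page in range(max_pages):
        for item in recommend_page(catalog, page):
            action = agent.act(item)
            log.append(action)
            skips = skips + 1 if action[0] == "skip" else 0
            if skips >= patience:  # crude stand-in for the LLM exit decision
                log.append(("exit", None, None))
                return log
    log.append(("exit", None, None))
    return log

catalog = [{"id": i, "genre": "sci-fi" if i % 2 else "romance"} for i in range(15)]
agent = SimAgent(user_id=1, liked_genres={"sci-fi"})
session_log = run_session(agent, catalog)
```

The structured log produced here mirrors the idea of recording typed actions per page, which is what enables the longitudinal analyses described above.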
- Interactive user simulation with memory and emotion modeling
- Cross-domain support (movies, books, games) for generalization tests
- Data augmentation via agent-generated ratings and interactions
- Tools for **causal analysis** of quality, popularity, exposure, and consumption

## Setup and Environment
- Language/stack: **Python**, **PyTorch**
- Recommended versions: Python 3.9.12; torch 1.13.1+cu117 (note: issues reported with Python > 3.10 due to reckit)
- Typical steps:
  - Install dependencies: `pip install -r requirements.txt`
  - Build extensions from `recommenders/`: `python setup.py build_ext --inplace`
- Datasets: initialize with **MovieLens-1M**; experiments also include **Amazon-Book** and **Steam**
- Full instructions in the [repository](https://github.com/LehengTHU/Agent4Rec)

## Who It's For
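Put together, the documented steps amount to something like the following; this assumes a fresh Python 3.9 environment (the project notes problems with Python > 3.10 due to reckit), and the repository README remains the authoritative guide.

```shell
# Clone the research codebase
git clone https://github.com/LehengTHU/Agent4Rec.git
cd Agent4Rec

# Install Python dependencies (Python 3.9.x recommended)
pip install -r requirements.txt

# Build the compiled extensions used by the recommender baselines
cd recommenders
python setup.py build_ext --inplace
```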
- Recommender systems researchers and PhD students
- Academic labs studying **agentic RS**, **user simulation**, or **RL over feedback loops**
- R&D teams needing offline policy testing and **counterfactual/causal analysis**
- Benchmark and dataset curators building **agentic evaluation environments**

## High-Value Use Cases
- Offline evaluation of ranking policies with **interactive** feedback
- Measuring and mitigating **filter bubbles** and exposure bias
- Generating synthetic labels/interactions to **reduce sparsity**
- Causal studies: disentangling **quality vs. popularity vs. exposure**
- Cross-domain stress testing (movies/books/games)
- Prototyping **agentic recommenders** with memory- and emotion-driven actions

## Benefits and Limitations
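For the sparsity-reduction use case, the core operation is folding simulated rating events back into a real interaction set. The sketch below is illustrative: the log schema `(user, action, item, rating)` and the function name are assumptions, not the actual Agent4Rec output format, and real observations deliberately take precedence over synthetic ones.

```python
# Sketch: densify a sparse rating set with agent-generated interactions.
# Log tuple layout (user, action, item, rating) is hypothetical.
def augment_ratings(real_ratings, agent_log):
    """Merge real (user, item, rating) triples with simulated 'rate' events.

    On conflicts the real observation wins, so synthetic data only fills gaps.
    """
    merged = {(u, i): r for (u, act, i, r) in agent_log if act == "rate"}
    merged.update({(u, i): r for (u, i, r) in real_ratings})  # real overrides
    return [(u, i, r) for (u, i), r in sorted(merged.items())]

real = [(1, 10, 5), (2, 11, 3)]
log = [
    (1, "watch", 12, None),  # non-rating events are ignored
    (1, "rate", 12, 4),      # new synthetic rating -> kept
    (2, "rate", 11, 5),      # conflicts with a real rating -> dropped
]
augmented = augment_ratings(real, log)
# augmented == [(1, 10, 5), (1, 12, 4), (2, 11, 3)]
```

Keeping real data authoritative is one guard against the overfitting-to-simulation risk noted in the limitations below.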
### Benefits

- Enables safe, repeatable policy experiments without live users
- Produces structured logs for **longitudinal analysis**
- Cited in surveys and repositories, reinforcing research credibility: see [survey 1](https://arxiv.org/html/2503.05659v1), [survey 2](https://arxiv.org/html/2503.16734v1)

### Limitations

- LLM simulators aren't ground truth; risk of **overfitting to simulation artifacts**
- Running large agent populations can be **compute/cost intensive**
- No hosted service or enterprise benchmarks (e.g., G2/Capterra); positioned as **research-only**

For community perspectives on multi-agent systems and their complexity, see [discussion 1](https://www.reddit.com/r/AI_Agents/comments/1j9bwl7/do_we_actually_need_multiagent_ai_systems/), [discussion 2](https://www.reddit.com/r/LangChain/comments/1izsw0u/which_framework_you_use_for_multiagents/), [discussion 3](https://www.reddit.com/r/LangChain/comments/1byz3lr/insights_and_learnings_from_building_a_complex/).
## Research and Recognition
- Paper: [On Generative Agents in Recommendation (SIGIR 2024)](https://arxiv.org/abs/2310.10108) | [ACM DL entry](https://dl.acm.org/doi/10.1145/3626772.3657844)
- Authors: An Zhang, Yuxin Chen, Leheng Sheng, Xiang Wang, Tat-Seng Chua
- Directory listings (research context): [AI Agents List](https://aiagentslist.com/agent/agent4rec), [AI Agent Store](https://aiagentstore.ai/ai-agent/agent4rec), [AgentLocker](https://www.agentlocker.ai/agent/agent4rec)

## Availability and "Company" Status
- Not a commercial company or hosted product; no LinkedIn company page, G2, or Capterra listings
- Maintained as an **open-source research codebase**: [GitHub](https://github.com/LehengTHU/Agent4Rec)

## SEO Quick Facts
- Keywords: generative agents, user simulation, recommender systems, LLM agents, filter bubbles, exposure bias, offline evaluation, data augmentation, SIGIR 2024, MovieLens-1M
- Related ecosystems: Python, PyTorch, collaborative filtering, agent-based modeling

## Related Resources
- Paper PDF mirror: [SciSpace PDF](https://scispace.com/pdf/on-generative-agents-in-recommendation-3bxda56n9g.pdf)
- Additional directories: [AI Agents Directory](https://aiagents.directory/agent4rec/), [AI Agents Base](https://aiagentsbase.com/agents/agent4rec)

For details on running simulations, configuration files, and evaluation scripts, consult the README in the [repository](https://github.com/LehengTHU/Agent4Rec).