Abstract
Large language models enable increasingly expressive agent-based simulations, but pose methodological challenges regarding behavioral validity. This paper evaluates LLM-driven simulation credibility through a social media test case examining information engagement. Using a Weibo-like environment, the study systematically manipulates information load and descriptive norms while allowing popularity cues to evolve endogenously. The research tests whether simulated user behavior responds systematically to theoretical constructs rather than producing merely plausible outputs. Findings indicate that engagement responds systematically to information load and descriptive norms, with sensitivity to popularity cues varying across contexts. The paper discusses methodological implications for simulation-based communication research, particularly for multi-condition experimental designs involving LLM-driven agents.
Categories