Abstract
Large language model (LLM)-based agents rely on memory mechanisms to reuse knowledge from past problem-solving experiences. Existing approaches typically construct memory in a per-agent manner, tightly coupling stored knowledge to a single model's reasoning style. In modern deployments with heterogeneous agents, a natural question arises: can a single memory system be shared across different models? We found that naively transferring memory between agents often degrades performance, as such memory entangles task-relevant knowledge with agent-specific biases. To address this challenge, we propose MemCollab, a collaborative memory framework that constructs agent-agnostic memory by contrasting reasoning trajectories generated by different agents on the same task.
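The core idea in the abstract, contrasting trajectories from different agents on the same task to separate shared, task-relevant knowledge from agent-specific biases, can be sketched as follows. This is a minimal illustration only; the function name, trajectory representation, and contrast rule are hypothetical and not taken from the paper, which does not specify its mechanism here.

```python
# Hypothetical sketch: reasoning steps that appear in BOTH agents'
# trajectories are treated as agent-agnostic and kept in shared memory;
# steps unique to one agent are treated as agent-specific bias.
def contrast_trajectories(traj_a: list[str], traj_b: list[str]) -> dict:
    """Split reasoning steps into shared vs. agent-specific sets."""
    set_a, set_b = set(traj_a), set(traj_b)
    shared = [step for step in traj_a if step in set_b]
    only_a = [step for step in traj_a if step not in set_b]
    only_b = [step for step in traj_b if step not in set_a]
    return {"shared": shared, "agent_specific": only_a + only_b}

# Two agents solving the same task with different reasoning styles:
memory = contrast_trajectories(
    ["parse input", "use chain-of-thought", "apply formula X"],
    ["parse input", "write python tool call", "apply formula X"],
)
print(memory["shared"])  # → ['parse input', 'apply formula X']
```

A real system would of course contrast semantically similar (not string-identical) steps, likely via an LLM or embedding similarity; the set intersection above only stands in for that comparison.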