CAMA: Exploring Collusive Adversarial Attacks in Cooperative Multi-Agent Reinforcement Learning
CAMA Research Team
March 21, 2026
arXiv: 2603.20390v1


Abstract

Cooperative multi-agent reinforcement learning (c-MARL) has been widely deployed in real-world applications, including social robots, embodied intelligence, and UAV swarms. However, various adversarial attacks continue to threaten c-MARL systems. Existing studies primarily focus on single-adversary perturbation attacks and white-box adversarial attacks that manipulate agents' internal observations or actions. This paper proposes a novel study of collusive adversarial attacks by strategically organizing malicious agents into three collusive attack modes: Collective Malicious Agents, Disguised Malicious Agents, and Spied Malicious Agents. The proposed unified framework CAMA enables policy-level collusive attacks, with attack effectiveness theoretically analyzed from perspectives of disruptiveness and stealthiness.
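The abstract does not specify CAMA's algorithm, but the idea of the "Collective Malicious Agents" mode, where a colluding subset acts at the policy level rather than perturbing observations, can be illustrated with a hypothetical sketch. All names here (`team_reward`, `benign_policy`, `collusive_attack`, the toy coordination task, and the brute-force joint search) are illustrative assumptions, not the paper's method:

```python
import itertools

def team_reward(actions):
    # Toy cooperative task: reward is the number of agents that pick
    # the majority action, so coordination pays off.
    counts = {}
    for a in actions:
        counts[a] = counts.get(a, 0) + 1
    return max(counts.values())

def benign_policy(agent_id, n_actions):
    # Benign agents all follow the same learned convention: action 0.
    return 0

def collusive_attack(n_agents, malicious_ids, n_actions=3):
    """Hypothetical 'Collective Malicious Agents' mode: the colluding
    subset jointly searches for the action assignment that minimizes
    the team reward, while benign agents act normally."""
    actions = [benign_policy(i, n_actions) for i in range(n_agents)]
    best_actions, best_reward = None, None
    # Exhaustive joint search over the colluders' action space
    # (brute force for illustration; a real attack would learn a policy).
    for joint in itertools.product(range(n_actions), repeat=len(malicious_ids)):
        trial = list(actions)
        for idx, a in zip(malicious_ids, joint):
            trial[idx] = a
        r = team_reward(trial)
        if best_reward is None or r < best_reward:
            best_reward, best_actions = r, trial
    return best_actions, best_reward
```

With 5 agents and colluders `{3, 4}`, the colluders split across two non-majority actions, dropping the team reward from 5 to 3; the joint search is what distinguishes collusion from independent single-adversary attacks.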



Categories

cs.AI, cs.MA, cs.LG
