Paper Detail
In-the-Wild Camouflage Attack on Vehicle Detectors through Controllable Image Editing
cs.CV · Tags: CV, Transformer, Trending, Object Detection, Embodied AI
Anonymous Authors
March 20, 2026
arXiv: 2603.19456v1

Authors: 1

Tags: 5

Content status: PDF available; original + Chinese translation


Abstract

Deep neural networks have achieved remarkable success in computer vision but remain highly vulnerable to adversarial attacks. This paper proposes a new framework that formulates vehicle camouflage attacks as a conditional image-editing problem, exploring both image-level and scene-level camouflage generation strategies. The method fine-tunes a ControlNet to synthesize camouflaged vehicles directly on real images while enforcing vehicle structural fidelity, style consistency, and adversarial effectiveness through a unified objective. Experiments on COCO and LINZ datasets demonstrate that the approach achieves significantly stronger attack effectiveness with more than 38% AP50 decrease, while better preserving vehicle structure and improving human-perceived stealthiness compared to existing methods.

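The abstract describes a unified objective that balances vehicle structural fidelity, style consistency, and adversarial effectiveness during ControlNet fine-tuning. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of how such a weighted multi-term objective might be composed; the individual loss terms (edge-based structure loss, Gram-matrix style loss, detector-confidence suppression) and the weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gram(feats):
    # feats: (C, N) flattened feature map; the Gram matrix
    # captures channel-correlation statistics used for style matching
    return feats @ feats.T / feats.shape[1]

def unified_objective(edited_feats, ref_feats, edge_pred, edge_gt,
                      det_conf, w_struct=1.0, w_style=0.5, w_adv=2.0):
    """Weighted sum of the three terms named in the abstract:
    structural fidelity, style consistency, adversarial effectiveness.
    All concrete choices below are assumptions for illustration."""
    # structural fidelity: edited edges should match the vehicle's edges
    l_struct = np.mean((edge_pred - edge_gt) ** 2)
    # style consistency: edited features should match camouflage style stats
    l_style = np.mean((gram(edited_feats) - gram(ref_feats)) ** 2)
    # adversarial effectiveness: drive the detector's confidence down
    l_adv = np.mean(det_conf)
    return w_struct * l_struct + w_style * l_style + w_adv * l_adv
```

Minimizing this objective trades off hiding from the detector (low `det_conf`) against keeping the edit structurally and stylistically plausible, which is the balance the reported AP50 drop and stealthiness results suggest.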

PDF Preview

View or download the PDF on arXiv

Categories

cs.CV, cs.AI
