Paper Detail
BiPreManip: Learning Affordance-Based Bimanual Preparatory Manipulation through Anticipatory Collaboration
Tags: cs.CV, Autonomous Driving, CV, Transformer, Object Detection
Authors: BiPreManip Authors
Date: March 23, 2026
arXiv: 2603.21679v1

Number of authors: 1
Number of tags: 5
Content status: PDF included


Abstract

This paper addresses bimanual robot manipulation tasks where coordinated actions between two robotic arms are required for complex object interactions. The work introduces a Collaborative Preparatory Manipulation framework that enables robots to perform sequential preparatory actions - such as pushing objects to accessible positions or lifting items - to facilitate subsequent goal-directed manipulations by the other arm. The proposed visual affordance-based approach first anticipates the final task objective and then generates appropriate preparatory manipulations, requiring deep understanding of object geometry, spatial relationships, and semantic properties. By learning from demonstrations and employing vision-based affordance recognition, the framework achieves effective bimanual coordination for tasks involving objects that are difficult to grasp directly.
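The anticipate-then-prepare pipeline the abstract describes can be sketched as a minimal control loop: first predict the final task objective, then pick a preparatory action for one arm that makes the goal-directed manipulation feasible for the other. All names and the placeholder heuristics below are illustrative assumptions, not the paper's actual API; the real framework learns these mappings from demonstrations via visual affordances.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    object_pose: tuple   # (x, y) position of the target object
    graspable: bool      # whether the object can be grasped directly

def anticipate_goal(obs: Observation) -> str:
    """Stage 1: anticipate the final task objective (placeholder heuristic)."""
    return "grasp_object"

def preparatory_action(obs: Observation, goal: str) -> str:
    """Stage 2: choose a preparatory manipulation for the first arm.

    If the object cannot be grasped directly (e.g. lying flat on a
    table), push or lift it into an accessible pose before the other
    arm attempts the goal-directed grasp.
    """
    if goal == "grasp_object" and not obs.graspable:
        return "push_to_accessible_pose"
    return "no_op"

def bimanual_step(obs: Observation) -> tuple:
    """One coordination step: (arm-A preparatory action, arm-B goal action)."""
    goal = anticipate_goal(obs)
    prep = preparatory_action(obs, goal)
    return prep, goal

# A thin object lying flat cannot be grasped directly, so arm A prepares first.
prep, goal = bimanual_step(Observation(object_pose=(0.4, 0.1), graspable=False))
print(prep, goal)  # push_to_accessible_pose grasp_object
```

The key design point mirrored here is the ordering: the goal is anticipated *before* the preparatory action is chosen, so the preparation is conditioned on what the second arm will ultimately do.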


View / download the PDF on arXiv.

Categories: cs.CV, cs.RO
