Abstract
This paper presents an interpretable object detection framework using Kolmogorov-Arnold networks to enhance trustworthiness in autonomous vehicle perception systems. The approach addresses the critical limitation of limited transparency in confidence scores during visually degraded or ambiguous driving scenarios. A Kolmogorov-Arnold network serves as an interpretable post-hoc surrogate model for YOLOv10 detections, utilizing seven geometric and semantic features to assess detection reliability. The additive spline-based architecture enables direct visualization of feature contributions, revealing when confidence scores are well-supported versus unreliable. Experimental validation on COCO dataset and University of Bath campus images demonstrates accurate trustworthiness estimation for autonomous driving perception.
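The abstract describes a KAN-style additive model: each input feature passes through its own univariate spline, and the per-feature contributions are summed, which is what makes the trust estimate directly visualizable. A minimal sketch of that idea is below, using piecewise-linear splines for simplicity; the seven feature names, the knot layout, and the sigmoid squashing are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class AdditiveSplineScorer:
    """KAN-style additive surrogate: one univariate piecewise-linear
    spline per detection feature; the contributions are summed and
    squashed to a trust score in (0, 1).

    The feature names below are hypothetical placeholders for the seven
    geometric and semantic features mentioned in the abstract.
    """

    FEATURES = ["confidence", "box_area", "aspect_ratio", "center_dist",
                "class_prior", "overlap", "edge_proximity"]

    def __init__(self, n_knots=8, seed=0):
        rng = np.random.default_rng(seed)
        # Shared knot grid on [0, 1]; inputs are assumed pre-normalized.
        self.knots = np.linspace(0.0, 1.0, n_knots)
        # One vector of knot values per feature. Randomly initialized
        # here; in practice these would be fitted to labelled detections.
        self.values = rng.normal(0.0, 0.1, (len(self.FEATURES), n_knots))

    def spline_contributions(self, x):
        """Per-feature additive contributions (the interpretable part)."""
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.array([np.interp(xi, self.knots, vi)
                         for xi, vi in zip(x, self.values)])

    def trust_score(self, x):
        """Sum the univariate contributions and squash with a sigmoid."""
        s = self.spline_contributions(x).sum()
        return 1.0 / (1.0 + np.exp(-s))

scorer = AdditiveSplineScorer()
x = np.array([0.9, 0.3, 0.5, 0.2, 0.8, 0.1, 0.4])  # 7 normalized features
contrib = scorer.spline_contributions(x)  # one value per feature
score = scorer.trust_score(x)             # overall trust estimate
```

Because the model is purely additive, plotting each fitted spline over its feature's range shows exactly how that feature raises or lowers the trust score, which is the transparency property the abstract emphasizes.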