Paper Detail
TREX: Trajectory Explanations for Multi-Objective Reinforcement Learning
Multiple Authors
March 23, 2026
arXiv: 2603.21988v1


Abstract

This paper presents TREX, a Trajectory-based Explainability framework designed for Multi-Objective Reinforcement Learning (MORL). The work addresses the limitation that traditional Explainable Reinforcement Learning (XRL) methods are typically tailored for single scalar rewards and fail to provide explanations when agents optimize multiple conflicting objectives simultaneously. The proposed approach enables agents to explicitly reason about trade-offs between different objectives and generates interpretable explanations for the decision-making process behind objective trade-offs. By focusing on trajectory-level explanations, TREX provides insights into how agents navigate decision spaces when balancing competing objectives in complex real-world scenarios.
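The abstract describes explanations at the level of whole trajectories rather than single actions. As a minimal sketch of that idea (the abstract does not specify TREX's actual mechanism; the `Step` structure, objective names, and toy trajectory below are all illustrative assumptions), one can account for each objective's discounted return along a trajectory separately, making the trade-off between conflicting objectives explicit instead of collapsing it into one scalar:

```python
# Hypothetical sketch of trajectory-level trade-off accounting in MORL.
# This is NOT TREX's published method; it only illustrates attributing a
# trajectory's outcome to each objective separately.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Step:
    state: str
    action: str
    rewards: Tuple[float, ...]  # one reward entry per objective


def per_objective_returns(trajectory: List[Step], gamma: float = 0.99) -> List[float]:
    """Discounted return computed independently for each objective."""
    n_obj = len(trajectory[0].rewards)
    returns = [0.0] * n_obj
    discount = 1.0
    for step in trajectory:
        for i in range(n_obj):
            returns[i] += discount * step.rewards[i]
        discount *= gamma
    return returns


# Toy trajectory with two conflicting objectives: speed vs. safety.
traj = [
    Step("s0", "accelerate", (1.0, -0.5)),
    Step("s1", "accelerate", (1.0, -0.5)),
    Step("s2", "brake",      (0.0,  1.0)),
]

speed, safety = per_objective_returns(traj, gamma=1.0)
print(f"speed return={speed:.1f}, safety return={safety:.1f}")
# With gamma=1.0: speed=2.0, safety=0.0 -> the early acceleration steps
# traded safety for speed, which the final brake step only partly recovered.
```

Reading a trajectory through such a per-objective ledger is one way to ground statements like "the agent sacrificed objective B to gain on objective A," which is the kind of trade-off reasoning the abstract attributes to TREX.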



Categories

cs.AI, cs.LG
