Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs). However, vanilla RLVR suffers from inefficient exploration, particularly when confronting hard samples that yield near-zero success rates. In such scenarios, the reliance on sparse outcome rewards typically results in zero-advantage estimates, effectively starving the model of supervision signals despite the high informational value of these instances. To address this, we propose P^2O, a novel framework that synergizes Prompt Optimization with Policy Optimization. P^2O identifies hard samples during training iterations and leverages the Genetic-Pareto (GEPA) prompt optimization algorithm to evolve prompt templates that guide the model toward discovering successful trajectories.
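The zero-advantage failure mode is easiest to see with a group-normalized advantage estimator of the kind RLVR pipelines such as GRPO use. Below is a minimal sketch, assuming GRPO-style normalization; the hard-sample check and function names are illustrative assumptions, not the paper's API:

```python
import statistics

def group_advantages(rewards, eps=1e-6):
    """GRPO-style group-normalized advantages: A_i = (r_i - mean) / (std + eps)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

def is_hard_sample(rewards):
    """Flag a prompt whose rollout group earned no reward at all (assumed
    criterion). Such groups yield all-zero advantages, so vanilla RLVR
    learns nothing from them; P^2O would route them to prompt evolution."""
    return max(rewards) == 0.0

# Mixed outcomes: informative, nonzero advantages.
print(group_advantages([1.0, 0.0, 1.0, 0.0]))   # ~[1.0, -1.0, 1.0, -1.0]

# All rollouts fail: mean = std = 0, so every advantage is exactly 0
# and the policy-gradient signal for this sample vanishes.
hard = [0.0, 0.0, 0.0, 0.0]
print(group_advantages(hard))                   # [0.0, 0.0, 0.0, 0.0]
print(is_hard_sample(hard))                     # True
```

Once a prompt template evolved by GEPA lets even a few rollouts succeed, the group rewards become non-uniform and the advantages above turn nonzero again, restoring a learning signal on exactly the samples that were previously starved.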