Revisiting Quantum Code Generation: Where Should Domain Knowledge Live?
Tags: cs.LG · Large Language Models · End-to-End · Transformer
Anonymous Authors
March 24, 2026
arXiv: 2603.22184v1


Abstract

This paper investigates how to effectively incorporate domain knowledge into LLM-based code generation systems for quantum software development. The researchers evaluate several strategies, including parameter-specialized fine-tuned models and general-purpose LLMs enhanced with retrieval-augmented generation and agent-based inference mechanisms. Using the Qiskit-HumanEval benchmark, they compare these approaches to quantum code generation targeting the Qiskit framework. The study finds that modern general-purpose LLMs with advanced inference techniques consistently outperform specialized fine-tuned baselines, achieving approximately 47% pass@1 performance. These findings suggest that general-purpose models with retrieval and execution-feedback mechanisms may be better suited to evolving software ecosystems than domain-specific specialized models.
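The pass@1 figure quoted in the abstract is the standard functional-correctness metric for code generation benchmarks such as Qiskit-HumanEval. A minimal sketch of the commonly used unbiased pass@k estimator (introduced alongside HumanEval), of which pass@1 is the special case c/n; the sample counts below are illustrative, not the paper's:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n generated samples per task,
    c of which pass the tests, estimate the probability that at least
    one of k randomly drawn samples passes."""
    if n - c < k:
        return 1.0  # every draw of k samples must contain a passing one
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this reduces to c / n, e.g. ~47% if 47 of 100 samples pass.
print(round(pass_at_k(100, 47, 1), 2))  # → 0.47
```

The benchmark score is then the mean of this estimate over all tasks.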

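The abstract credits execution-feedback mechanisms for part of the general-purpose models' advantage. As an illustration only (not the paper's actual harness), a minimal loop that runs a candidate solution against its tests in a subprocess and hands the error trace back to a hypothetical `revise` model call:

```python
import subprocess
import sys
import tempfile

def revise(code: str, stderr: str) -> str:
    """Placeholder for an LLM repair call; a real system would send
    the failing code and the error trace back to the model."""
    return code

def run_with_feedback(candidate: str, tests: str, max_rounds: int = 3):
    """Execute candidate + tests in a subprocess; on failure, pass the
    captured stderr to `revise` and retry. Returns (attempt, code) on
    success, or None if all rounds fail."""
    for attempt in range(max_rounds):
        with tempfile.NamedTemporaryFile(
            "w", suffix=".py", delete=False
        ) as f:
            f.write(candidate + "\n" + tests)
        result = subprocess.run(
            [sys.executable, f.name],
            capture_output=True, text=True, timeout=30,
        )
        if result.returncode == 0:
            return attempt, candidate
        candidate = revise(candidate, result.stderr)
    return None

# Usage: a trivially correct candidate passes on the first attempt.
outcome = run_with_feedback("def bell_pairs(n):\n    return 2 * n",
                            "assert bell_pairs(3) == 6")
print(outcome[0])  # → 0
```

In the paper's setting the tests would come from the benchmark and `revise` would be the general-purpose LLM, optionally augmented with retrieved Qiskit documentation.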


Categories

cs.LG · cs.SE
