ProRefine: Inference-Time Prompt Refinement with Textual Feedback

Jan 1, 2025 · Deepak Pandita, Tharindu Cyril Weerasooriya, Ankit Parag Shah, Isabelle Diana May-Xin Ng, Christopher M. Homan, Wei Wei
Abstract
Agentic workflows, where multiple AI agents collaborate to accomplish complex tasks like reasoning or planning, play a substantial role in many cutting-edge commercial applications and continue to fascinate researchers across nearly all fields for their potential to accomplish expensive, complex tasks that, until recently, only humans were trusted to do. These workflows depend critically on the prompts that define the roles agents play within them. Poorly designed prompts that fail even slightly to guide individual agents can lead to sub-optimal performance that snowballs through a system of agents, limiting reliability and scalability. To address this important problem of inference-time prompt optimization, we introduce ProRefine, an inference-time optimization method that uses an agentic loop of LLMs to generate and apply textual feedback. ProRefine dynamically refines prompts for multi-step reasoning tasks without additional training or ground-truth labels. Evaluated on five benchmark mathematical reasoning datasets, ProRefine significantly surpasses zero-shot Chain-of-Thought baselines by 3 to 37 percentage points. This approach not only boosts accuracy but also allows smaller models to approach the performance of their larger counterparts, highlighting its potential for building more cost-effective and powerful hybrid AI systems and thereby democratizing access to high-performing AI.

When multiple AI agents work together (for example, one agent planning and another executing), they rely heavily on well-written prompts. But writing perfect prompts is hard, and even small mistakes can cascade through the system, causing failures. ProRefine addresses this by creating a feedback loop: one AI agent evaluates how well the prompt worked and suggests improvements, then another agent refines the prompt based on that feedback, all happening automatically at inference time without any training.
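To make the loop concrete, here is a minimal sketch of an inference-time prompt-refinement cycle in the spirit of ProRefine. Everything here is an illustrative assumption rather than the paper's implementation: the `complete` helper stands in for whatever LLM client you use, and the prompt templates and stopping criterion are placeholders.

```python
# Minimal sketch of an agentic prompt-refinement loop (illustrative only).
# `complete`, the prompt templates, and the stopping rule are assumptions,
# not the method as specified in the ProRefine paper.

def complete(prompt: str) -> str:
    """Call an LLM and return its text output (stub; wire up your own client)."""
    raise NotImplementedError

def refine_prompt(task: str, prompt: str, max_iters: int = 3) -> str:
    for _ in range(max_iters):
        # 1. Run the task agent with the current prompt.
        answer = complete(f"{prompt}\n\nTask: {task}\nThink step by step.")

        # 2. A feedback agent critiques the prompt given the observed output.
        feedback = complete(
            "You are a prompt critic. Given the task, the prompt, and the "
            "model's answer, explain how the prompt could better guide the "
            f"model.\nTask: {task}\nPrompt: {prompt}\nAnswer: {answer}"
        )

        # Stop when the critic sees nothing left to fix (placeholder criterion).
        if "no issues" in feedback.lower():
            break

        # 3. An optimizer agent rewrites the prompt using the textual feedback.
        prompt = complete(
            "Rewrite the prompt to address the feedback. Return only the "
            f"improved prompt.\nPrompt: {prompt}\nFeedback: {feedback}"
        )
    return prompt
```

Note that nothing in this loop requires gradients, fine-tuning, or labeled data: the feedback is plain text, which is what lets the refinement happen entirely at inference time.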

We tested ProRefine on mathematical reasoning tasks, where it improved accuracy by 3 to 37 percentage points over standard zero-shot Chain-of-Thought prompting. Even better, it lets smaller, cheaper AI models perform nearly as well as much larger, expensive ones. This makes powerful AI more accessible and cost-effective, especially for the complex multi-agent systems that are becoming critical in commercial applications.