Let’s explore the Search configuration panel of Progress Agentic RAG to learn about the settings that can help us tune our RAG responses.
In the previous article, we walked through the basics of the Progress Agentic RAG no-code dashboard: uploading a document, indexing it and asking natural language questions. We saw how the platform retrieves relevant passages and generates source-cited answers.
In this article, we’ll take a deeper look at the Search configuration panel and explore some of the settings that can shape the quality and relevance of our RAG system’s responses.
The Search Configuration Panel
When we navigate to the Search section in Progress Agentic RAG, we see a configuration panel on the right side of the screen. This panel is organized into four main sections:
- Search Options: Controls how queries are processed, filtered, and suggested.
- Generative Answer and RAG: Defines how the LLM generates responses using retrieved context.
- Result Display Options: Determines what information is shown.
- User-Intent Routing: Dynamically applies different configurations based on user intent.

Each section contains toggles, dropdowns and input fields that let us customize the RAG pipeline. We’ll walk through just a few options to illustrate how these settings influence the accuracy and usefulness of responses.
Configuration Options
Throughout this article, we’ll keep using the same example Knowledge Box from our previous walkthrough: one that indexes the Progress Software Q3 2025 Earnings Report. This gives us a concrete, real-world document to anchor the configuration examples that follow.
Rephrase Query
Sometimes the way we phrase a question isn’t optimal for semantic search. The Rephrase Query option uses an LLM to rewrite our query into a form that’s more likely to retrieve relevant results.
For example, if we ask “What did the CEO say about AI?”, the system might rephrase it as “What comments did the CEO make regarding AI?”. When testing this configuration and inspecting the logs, we observed exactly this kind of rephrased query.

Note: Query rephrasing only affects semantic search. Keyword search still uses the original query.
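To make the idea concrete, here is a minimal sketch of what a rephrasing step could look like. The `call_llm` helper and the prompt wording are hypothetical; Progress Agentic RAG performs this step internally, so this is purely illustrative.

```python
def rephrase_query(original_query: str, call_llm) -> str:
    """Rewrite a user query into a form better suited to semantic search.

    `call_llm` is a hypothetical helper that sends a prompt to an LLM and
    returns its text completion; the platform handles this internally.
    """
    prompt = (
        "Rewrite the following question so it retrieves the most relevant "
        "passages from a document index. Keep the meaning identical and "
        "return only the rewritten question.\n\n"
        f"Question: {original_query}"
    )
    return call_llm(prompt).strip()

# The rephrased form is used for the semantic (embedding) search only;
# keyword search still runs against the original query.
```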
Semantic Re-ranking
Initial search results are retrieved based on embedding similarity, but that’s not always the best ordering. Semantic re-ranking takes a larger initial set of results and uses a specialized re-ranking model to reorder them based on contextual relevance to the query.

Semantic re-ranking produces a more accurate, better-ordered list of results. This is particularly useful when dealing with large knowledge bases where the initial retrieval might surface many loosely related documents.
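Agentic RAG handles re-ranking for us, but conceptually the step looks something like the sketch below, which uses an open-source cross-encoder from the sentence-transformers library purely as a stand-in for whatever re-ranking model the platform actually applies.

```python
from sentence_transformers import CrossEncoder

# Stand-in re-ranking model; the platform's actual model may differ.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(query: str, passages: list[str], top_k: int = 5) -> list[str]:
    """Re-order an initial candidate set by contextual relevance to the query."""
    # Score each (query, passage) pair with the cross-encoder.
    scores = reranker.predict([(query, p) for p in passages])
    # Sort passages by descending relevance and keep the best top_k.
    ranked = sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)
    return [p for p, _ in ranked[:top_k]]
```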
Reciprocal Rank Fusion
Progress Agentic RAG uses both keyword and semantic (embedding-based) search. Because these scoring systems are not directly comparable, Agentic RAG can apply a rank fusion algorithm, Reciprocal Rank Fusion, to merge their ranked lists into a single unified ranking, boosting items that appear in multiple search modes.

A document that ranks highly in both semantic and keyword search is likely more relevant than one that appears in only one. Reciprocal Rank Fusion combines these signals, and we can even adjust the weighting to boost semantic results over keyword results (or vice versa) depending on our use case.
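The classic formulation scores each document as the sum of 1 / (k + rank) over the result lists it appears in, where k is a smoothing constant (60 is a common default). Here is a minimal sketch of that idea, with optional per-mode weights to mimic boosting one search mode over another; it is illustrative, not the platform’s internal implementation.

```python
def reciprocal_rank_fusion(result_lists: dict[str, list[str]],
                           weights: dict[str, float] | None = None,
                           k: int = 60) -> list[str]:
    """Merge several ranked lists of document ids into one fused ranking.

    result_lists maps a search mode (e.g. "semantic", "keyword") to its
    ranked document ids; weights optionally boosts one mode over another.
    """
    weights = weights or {}
    fused: dict[str, float] = {}
    for mode, docs in result_lists.items():
        w = weights.get(mode, 1.0)
        for rank, doc_id in enumerate(docs, start=1):
            # Documents appearing in several lists accumulate score.
            fused[doc_id] = fused.get(doc_id, 0.0) + w / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)

# Example: a chunk ranked highly by both modes rises to the top.
print(reciprocal_rank_fusion({
    "semantic": ["chunk_3", "chunk_7", "chunk_1"],
    "keyword":  ["chunk_7", "chunk_3", "chunk_9"],
}))
```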
Include Neighbouring Paragraphs
RAG works best when the model has sufficient context around a matched paragraph to understand it fully. The Include neighbouring paragraphs option lets us specify how many paragraphs before and after each hit are added to the context (for example, two preceding and two succeeding paragraphs).

When we’re dealing with structured documents like the Q3 2025 earnings report, neighboring paragraphs often include table captions, footnotes or explanatory text. Including them helps the model interpret numbers correctly and answer nuanced questions without losing the surrounding context.
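Conceptually, the option simply widens the window of text passed to the model around each matched paragraph. A rough sketch of that idea (the function and paragraph list are illustrative, not the platform’s internals):

```python
def expand_with_neighbours(paragraphs: list[str], hit_index: int,
                           before: int = 2, after: int = 2) -> str:
    """Return the matched paragraph plus its surrounding context."""
    # Clamp the window to the document boundaries.
    start = max(0, hit_index - before)
    end = min(len(paragraphs), hit_index + after + 1)
    return "\n\n".join(paragraphs[start:end])
```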
Use Images in Questions
Some models can process both text and images simultaneously. When we enable Use images in questions, the widget lets users attach screenshots or photos alongside their text query so the model can reason over both.

For example, even though it’s not strictly necessary for our CEO question, we could attach a profile picture of the CEO (Yogesh Gupta) and ask “What did the CEO say about AI in this section?”. The model can then use visual context alongside the retrieved text to ground its response.
Display Thumbnails
Visual cues can make it easier to recognize documents we care about. When we enable Display thumbnails, each result includes a small preview image (for example, the first page of a PDF or a snapshot of a webpage).
For the Q3 2025 earnings report, this might mean we immediately recognize the official investor PDF in a long list of resources, even before reading the title or metadata.
JSON Output
Sometimes we don’t just want a natural-language answer; we want a structured payload we can feed into another system. The JSON output option lets us provide a JSON schema describing the expected answer shape, and the model will try to respond with that structure.

For instance, instead of asking for a free-form explanation of the updated FY 2025 guidance, we could define a schema like:
```json
{
  "fiscal_year": "string",
  "revenue_guidance_min": "number",
  "revenue_guidance_max": "number",
  "previous_guidance_note": "string"
}
```
With JSON output enabled and this schema configured, the response can be parsed programmatically, for example to plot guidance ranges on a dashboard or compare them against previous quarters.
This kind of structured output is important when we use Agentic RAG from code rather than solely through the no‑code interface, because it lets downstream services consume answers directly rather than scraping free‑form text.
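As a rough sketch, assuming the generative answer comes back as a JSON string matching the schema above, a downstream script could consume it like this (the `answer_json` variable and its values are placeholders, not figures from the report):

```python
import json

# Hypothetical answer text matching the schema above; the figures are
# placeholders, not actual numbers from the earnings report.
answer_json = """{
  "fiscal_year": "FY 2025",
  "revenue_guidance_min": 100.0,
  "revenue_guidance_max": 110.0,
  "previous_guidance_note": "Guidance updated from the prior quarter."
}"""

guidance = json.loads(answer_json)
# Structured fields can feed a dashboard or a quarter-over-quarter comparison.
print(f"{guidance['fiscal_year']} revenue guidance: "
      f"{guidance['revenue_guidance_min']} to {guidance['revenue_guidance_max']}")
```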
Wrap-up
In this article, we’ve focused on just a small slice of what the Search configuration panel can do. We looked at options such as query rephrasing, semantic re-ranking, Reciprocal Rank Fusion, neighboring paragraphs, using images in questions, displaying thumbnails and JSON output. Even this handful of settings already shows how much control we have over how Progress Agentic RAG retrieves, ranks and presents context from our data resources.
For more details and to get started with Progress Agentic RAG, be sure to check out the following resources: