At Vurvey, we're committed to providing you with the most accurate and relevant information possible. Our AI-powered platform doesn't just give you answers — it shows you why those answers are relevant by grounding them in real data from your surveys and campaigns.
This approach ensures transparency and builds trust, allowing you to see firsthand how our AI reaches its conclusions. This article will walk you through how grounding works and explain the new evidence panel, which gives you even more visibility into what the AI used and how confident you can be in each part of its response.
What's new: We've completely overhauled how Vurvey presents the sources behind AI responses. Instead of a flat list of sources, you'll now see structured evidence with quality indicators, claim-level attribution, and honest signals when full citation mapping isn't available.
What is Grounding, and Why Does It Matter?
Imagine asking a question and receiving an answer without any context or source. You might wonder, "How do they know that?" or "Where did they get that information?" This is where grounding comes in.
Grounding connects an AI's response directly to the data it used to formulate that response. Instead of simply telling you something, Vurvey shows you the evidence — increasing transparency and giving you confidence in the information presented. Think of it as footnotes or citations, but far more intuitive and dynamic.
From Sources to Evidence
The most important change in this update is how we organize and label what the AI used. Previously, all sources appeared in a single flat list. Now, the evidence panel separates them into two clearly distinct sections. Here's the change at a glance:
| Before | After |
| --- | --- |
| Flat list of all sources | Cited Evidence vs. Other Sources Consulted |
| No distinction between cited & consulted | Inline numbers tied to specific claims |
| No quality levels | Three quality levels |
| No provenance summary | At-a-glance provenance summary |
| "Web sources" label even for your datasets | "Powered by" label accurately reflects source types |
| Unsupported claims never flagged | "Claims that need review" section |
| File citations: basic link only | Rich previews: PDF pages, timestamps, images |
Cited Evidence — Sources directly tied to specific claims in the response. These have inline citation numbers (superscripts) in the response text so you can see exactly which source supports which statement.
Other Sources Consulted — Sources the AI reviewed but didn't directly quote or paraphrase. They informed the overall direction of the response without mapping to specific claims.
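If it helps to picture the split, here is a minimal TypeScript sketch of the two buckets. The type and field names are illustrative assumptions, not Vurvey's actual API:

```ts
// Illustrative sketch only: these names are assumptions, not Vurvey's API.
interface EvidenceSource {
  id: string;
  title: string;
  url?: string;
  citationNumber?: number; // present only when tied to a specific claim
}

interface EvidencePanel {
  citedEvidence: EvidenceSource[];         // mapped to claims via inline numbers
  otherSourcesConsulted: EvidenceSource[]; // reviewed, but not mapped to claims
}

// A source with a citation number supports a specific statement in the
// response text; everything else informed the response more generally.
function splitSources(sources: EvidenceSource[]): EvidencePanel {
  return {
    citedEvidence: sources.filter((s) => s.citationNumber !== undefined),
    otherSourcesConsulted: sources.filter((s) => s.citationNumber === undefined),
  };
}
```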
Evidence Quality Levels
Not all cited sources are used the same way. Within the Cited Evidence section, sources are now grouped by how the AI actually used them:
| Level | Label in Vurvey | What it means |
| --- | --- | --- |
| Exact Quote | "Directly supported by source (quote)" | The AI's response directly matches text from this source. The highest level of directness. |
| Paraphrase | "Directly supported by source" | The AI closely paraphrases the source. A direct match, just not verbatim. |
| Synthesis | "Source informed this section" | The AI drew on this source but combined or rephrased information from multiple inputs. The weakest directness level. |
This distinction helps you quickly assess whether the AI is quoting your data verbatim or synthesizing across multiple sources.
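To make the grouping concrete, here is a small TypeScript sketch of how cited sources might be bucketed under the three labels. The label strings come from the table above; every other name is an assumption for illustration:

```ts
// The three quality levels, ordered from most to least direct.
type EvidenceQuality = "exact_quote" | "paraphrase" | "synthesis";

const QUALITY_LABELS: Record<EvidenceQuality, string> = {
  exact_quote: "Directly supported by source (quote)",
  paraphrase: "Directly supported by source",
  synthesis: "Source informed this section",
};

interface CitedSource {
  title: string;
  quality: EvidenceQuality;
}

// Group cited sources under their quality label, as the panel does.
function groupByQuality(sources: CitedSource[]): Map<string, CitedSource[]> {
  const groups = new Map<string, CitedSource[]>();
  for (const source of sources) {
    const label = QUALITY_LABELS[source.quality];
    const bucket = groups.get(label) ?? [];
    bucket.push(source);
    groups.set(label, bucket);
  }
  return groups;
}
```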
New Signals You'll See in the Evidence Panel
Provenance Summary
At the top of the evidence panel, you'll now see a one-line summary that gives you an at-a-glance sense of how well-supported the response is before you dive into the details:
1 verified claim • 49 informed sections • 25 cited sources
Synthesis Warning
When the AI draws more heavily on synthesis than direct quotes, a contextual warning chip appears. This isn't a red flag — it just tells you the response was built by combining information across sources rather than quoting them directly.
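Both the provenance summary and the synthesis warning can be derived from simple counts over the cited evidence. The TypeScript sketch below shows one plausible way to compute them; the names and the "more synthesis than direct support" threshold are guesses, not Vurvey's actual logic:

```ts
// Illustrative counts over the evidence panel; field names are assumptions.
interface EvidenceCounts {
  verifiedClaims: number;    // claims backed by a quote or paraphrase
  informedSections: number;  // sections backed by synthesis-level evidence
  citedSources: number;      // total sources in the Cited Evidence section
  quoteOrParaphrase: number; // direct-support citations
  synthesis: number;         // synthesis-level citations
}

// Build the one-line summary shown at the top of the panel, e.g.
// "1 verified claim • 49 informed sections • 25 cited sources".
function provenanceSummary(c: EvidenceCounts): string {
  const plural = (n: number, word: string) => `${n} ${word}${n === 1 ? "" : "s"}`;
  return [
    plural(c.verifiedClaims, "verified claim"),
    plural(c.informedSections, "informed section"),
    plural(c.citedSources, "cited source"),
  ].join(" • ");
}

// Show the warning chip when synthesis outweighs direct support.
// (The real threshold is unknown; this comparison is a stand-in.)
function showSynthesisWarning(c: EvidenceCounts): boolean {
  return c.synthesis > c.quoteOrParaphrase;
}
```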
Smart "Powered By" Label
The label below each AI response now accurately tells you what types of sources were used:
Powered by your data — response used only your workspace datasets, campaigns, or documents
Powered by web sources — response used only external web sources
Powered by your data & web sources — the response combined both
Previously, this label could incorrectly show "web sources" even when only your datasets were used. That's now fixed.
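The label is a straightforward function of which source types contributed. Here's a minimal sketch, assuming a boolean flag per origin; the label strings match this article, and everything else is illustrative:

```ts
// Which kinds of sources contributed to the response (assumed input shape).
interface SourceOrigins {
  usedWorkspaceData: boolean; // datasets, campaigns, or documents
  usedWebSources: boolean;    // external web results
}

// Pick the label that matches the actual mix of source types.
function poweredByLabel({ usedWorkspaceData, usedWebSources }: SourceOrigins): string {
  if (usedWorkspaceData && usedWebSources) return "Powered by your data & web sources";
  if (usedWebSources) return "Powered by web sources";
  return "Powered by your data";
}
```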
Claims That Need Review
When the AI makes a claim that isn't directly backed by any source in your data, it's now flagged explicitly in a "Claims that need review" section at the bottom of the evidence panel. This helps you identify parts of the response that may need fact-checking.
Note: The system automatically filters out structural text (like headings and transition phrases) and null-result statements ("I found no mentions of...") so only substantive unsupported claims are surfaced.
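As a rough illustration, that filtering can be expressed with a couple of heuristics. The patterns below are assumptions that mirror the note, not Vurvey's actual rules:

```ts
// Headings and stock transition openers carry no checkable content.
// Both the length cutoff and the phrase list are guessed heuristics.
function isStructuralText(claim: string): boolean {
  const transitions = ["in summary", "next,", "let's look at", "overall,"];
  const lower = claim.trim().toLowerCase();
  return lower.length < 20 || transitions.some((t) => lower.startsWith(t));
}

// Statements like "I found no mentions of..." need no source to back them.
function isNullResult(claim: string): boolean {
  return /\b(found no|no mentions of|no results for)\b/i.test(claim);
}

// Keep only substantive unsupported claims for "Claims that need review".
function claimsNeedingReview(unsupported: string[]): string[] {
  return unsupported.filter((c) => !isStructuralText(c) && !isNullResult(c));
}
```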
Rich File Citations
When the AI references files you've uploaded to your workspace, it now shows much richer context than a basic link:
📄 PDF Pages
Shows the specific page number with an embedded preview so you can see the exact content referenced.
🎬 Video Segments
Shows the timestamp range the AI referenced (e.g., "0:30 – 1:45") with a link to that moment in the video.
🎙️ Audio Segments
Shows the timestamp range with an audio player link for the referenced portion.
🖼️ Images
Shows a thumbnail preview of the exact image the AI referenced.
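One way to picture these four preview types is as a single citation value with a per-type shape, as in this TypeScript sketch. The field names are assumptions, not Vurvey's data model:

```ts
// Each file type carries the context its preview needs (assumed shapes).
type FileCitation =
  | { kind: "pdf"; fileName: string; page: number; previewUrl: string }
  | { kind: "video"; fileName: string; startSec: number; endSec: number; linkUrl: string }
  | { kind: "audio"; fileName: string; startSec: number; endSec: number; playerUrl: string }
  | { kind: "image"; fileName: string; thumbnailUrl: string };

// Format seconds as a timestamp, e.g. 105 -> "1:45".
const ts = (s: number) => `${Math.floor(s / 60)}:${String(s % 60).padStart(2, "0")}`;

// Build the caption shown next to each preview.
function citationCaption(c: FileCitation): string {
  switch (c.kind) {
    case "pdf":
      return `${c.fileName}, page ${c.page}`;
    case "video":
    case "audio":
      return `${c.fileName} (${ts(c.startSec)} – ${ts(c.endSec)})`;
    case "image":
      return c.fileName;
  }
}
```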
What Happens When Full Citation Mapping Isn't Available
Sometimes the AI can't perform full claim-level citation mapping — for example, when a response's structure doesn't lend itself to per-claim attribution. Previously, this meant citations would be silently dropped. Now, Vurvey handles it honestly:
The full list of sources that informed the response is still shown
A clear warning appears: "Source list only. Claim-level citation mapping was unavailable for this response."
This is more transparent and useful than showing nothing at all.
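In code terms, the fallback amounts to always returning the source list and attaching the warning whenever no per-claim mapping exists. A minimal sketch, with assumed names; the warning string is the one quoted above:

```ts
// Assumed shape of a grounding result; mapping is absent when unavailable.
interface GroundingResult {
  sources: string[];
  claimMapping?: Map<number, string[]>; // claim index -> supporting source ids
}

const MAPPING_UNAVAILABLE_WARNING =
  "Source list only. Claim-level citation mapping was unavailable for this response.";

// Rather than silently dropping citations, always show the source list and
// surface a warning when per-claim mapping could not be produced.
function renderEvidence(result: GroundingResult): { sources: string[]; warning?: string } {
  if (!result.claimMapping) {
    return { sources: result.sources, warning: MAPPING_UNAVAILABLE_WARNING };
  }
  return { sources: result.sources };
}
```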
Vurvey's 3-Step Approach to Grounding
The evidence you see in the panel is the result of a three-step process happening behind the scenes:
Context Relevancy: Finding the Right Information
Vurvey analyzes your question to identify key terms and concepts, then searches through your datasets and campaigns to find the most relevant information. Think of it as a highly efficient librarian working through a vast library.
Context Precision: Ranking the Best Evidence
Not all data is equally relevant. Vurvey scores each source based on how closely it matches your query, so the AI is always working with the most pertinent information when it constructs a response.
Answer Relevancy: Connecting the Dots
The AI crafts a response drawing directly from the highest-ranked sources. Grounding snippets are then surfaced alongside the response so you can see exactly where each part of the answer came from — and how directly.
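To tie the three steps together, here is a deliberately simplified TypeScript sketch. The keyword-overlap scoring is a stand-in for Vurvey's real retrieval and ranking, included only to show how the steps feed into one another:

```ts
interface Source { id: string; text: string }
interface ScoredSource extends Source { score: number }

// Step 1 (Context Relevancy): find candidate sources for the query.
// A naive keyword match stands in for real retrieval here.
function findRelevant(query: string, corpus: Source[]): Source[] {
  const terms = query.toLowerCase().split(/\s+/);
  return corpus.filter((s) => terms.some((t) => s.text.toLowerCase().includes(t)));
}

// Step 2 (Context Precision): score and rank candidates by match quality.
function rank(query: string, candidates: Source[]): ScoredSource[] {
  const terms = query.toLowerCase().split(/\s+/);
  return candidates
    .map((s) => ({
      ...s,
      score: terms.filter((t) => s.text.toLowerCase().includes(t)).length / terms.length,
    }))
    .sort((a, b) => b.score - a.score);
}

// Step 3 (Answer Relevancy): the response draws on the top-ranked sources,
// which then surface as the grounding snippets shown in the evidence panel.
function groundingSnippets(query: string, corpus: Source[], topK = 5): ScoredSource[] {
  return rank(query, findRelevant(query, corpus)).slice(0, topK);
}
```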
What Hasn't Changed
This update is entirely about making evidence more visible and trustworthy. Everything about how you interact with Vurvey remains the same — how you send messages, attach datasets, interact with agents, and view inline citation numbers in response text. Relevance score bars and source links all work as before.
