Groundedness

The Groundedness metric measures whether a model’s generated response remains faithful to the input context and avoids hallucinations or speculative reasoning. It helps evaluate factual consistency and traceability, especially when LLMs generate answers based on external sources, retrieved passages, or structured data.

This metric is critical in RAG (retrieval-augmented generation), enterprise QA systems, and any use case where factuality is a core requirement.

When to Use

Apply the Groundedness metric when a model is expected to remain tightly aligned with an authoritative context, such as:

  • Knowledge-grounded agents that cite documents or structured data
  • Enterprise chat assistants referencing product, support, or policy content
  • Tool-using agents interpreting intermediate function/state output
  • Factual creative assistants (e.g., educational, journalistic, research support)

Groundedness vs. Hallucination

While both metrics assess factual reliability, they are optimized for different needs:

| Metric | What It Evaluates |
| --- | --- |
| Groundedness | Faithfulness to a given input context (e.g., documents, structured state) |
| Hallucination | Presence of unsupported or fabricated claims |

Use Groundedness when verifying that the response stays true to the context.
Use Hallucination when checking whether the model invents facts — regardless of whether context is provided.

These metrics are complementary: use Groundedness to test context fidelity, and Hallucination to flag fabrication risk in general-purpose generation.

Score

The API returns a score (float, 0.0 – 1.0) under the groundedness key.

  • 1.0: Fully faithful to the context; no unsupported or fabricated claims.
  • 0.7–0.99: Mostly accurate, with minor factual ambiguity.
  • 0.2–0.69: Some inaccuracies or invented facts present.
  • 0.0–0.19: Severe hallucinations or clear contradiction with the context.

A higher score is better. A lower score indicates factual inconsistency, hallucinations, or unsupported reasoning.
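
The score bands above can be turned into human-readable labels when triaging results. The helper and its labels below are illustrative only; they are not part of the AIMon API:

```python
def groundedness_band(score: float) -> str:
    """Map a groundedness score to the bands described above (labels are illustrative)."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0.0, 1.0]")
    if score == 1.0:
        return "fully grounded"
    if score >= 0.7:
        return "mostly accurate"
    if score >= 0.2:
        return "some inaccuracies"
    return "severe hallucination"

print(groundedness_band(0.85))  # mostly accurate
```

A gate in a RAG pipeline might, for example, only surface answers whose score falls in the top two bands.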

API Request & Response Example

[
  {
    "context": "The Eiffel Tower is located in Paris, France.",
    "generated_text": "The Eiffel Tower is located in England.",
    "config": {
      "groundedness": {
        "detector_name": "default",
        "explain": true
      }
    }
  }
]
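
If you are calling the HTTP API directly, the request body above can be built with the standard library rather than hand-written JSON. This is a minimal sketch that only reproduces the payload shown; it does not send a request:

```python
import json

# Mirror the batch request body from the example above.
payload = [
    {
        "context": "The Eiffel Tower is located in Paris, France.",
        "generated_text": "The Eiffel Tower is located in England.",
        "config": {
            "groundedness": {"detector_name": "default", "explain": True}
        },
    }
]

body = json.dumps(payload, indent=2)
print(body)
```

Note that Python's `True` serializes to JSON `true`, matching the wire format expected by the API.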

Code Example

from aimon import Detect
import os

# The decorated function must return values in the order listed in
# values_returned; the detector evaluates generated_text against context.
detect = Detect(
    values_returned=['context', 'generated_text'],
    config={"groundedness": {"detector_name": "default", "explain": True}},
    api_key=os.getenv("AIMON_API_KEY"),
    application_name="application_name",
    model_name="model_name"
)

@detect
def check_claims(context, prompt):
    # A deliberately ungrounded response: wrong year, wrong city.
    return context, "It was built in 1800 and is located in Rome."

ctx, output, grounded = check_claims("Eiffel Tower is in Paris and built in 1889", "Tell me about it")
print(grounded)