Context Quality
The Context Quality API evaluates the quality of a generated response against the documents retrieved by your RAG pipeline. It is designed to diagnose issues that can lead to hallucinations, grammatical failures, or confusing outputs, and to pinpoint whether they stem from poor context usage or from generation errors.
This evaluation is especially useful for summarization, question-answering, and generation pipelines where fidelity to retrieved context is important.
Labels Evaluated
The following signals are analyzed in the generated output:
- Conflicting Information: The response contradicts itself or the context. Example: "Quantum is good bad..."
- Missing Punctuation: The response omits necessary punctuation, making it hard to parse. Example: "Quantum is revolutionary exciting dangerous unstoppable"
- Excessive Noise: The output contains malformed tokens, symbols, or artifacts. Example: "quantum @@% computers !! obsolet yes"
- Incomplete Sentences: The sentence structure is broken, fragmented, or grammatically invalid. Example: "Quantum is..."
Scoring
The model returns a score between 0.0 and 1.0 under the context_classification key.
| Score Range | Interpretation |
|---|---|
| 0.7–1.0 | Output is clean and well-formed |
| 0.3–0.7 | Minor quality issues present |
| 0.0–0.3 | Major failures or low-quality text detected |
The score is computed as the lowest follow_probability among all evaluated instructions. This ensures that even a single serious violation (e.g., a contradiction or malformed text) sharply lowers the score, highlighting degraded context quality.
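The minimum-based aggregation described above can be sketched in a few lines of Python. The instruction names and probabilities here are illustrative placeholders, not actual API output:

```python
def context_quality_score(instructions_list):
    """Overall score = lowest follow_probability across all instructions."""
    return min(item["follow_probability"] for item in instructions_list)

# Illustrative follow probabilities for the four evaluated labels
instructions_list = [
    {"instruction": "no contradictions", "follow_probability": 0.3486},
    {"instruction": "necessary punctuation", "follow_probability": 0.852},
    {"instruction": "no excessive noise", "follow_probability": 0.9149},
    {"instruction": "complete sentences", "follow_probability": 0.9669},
]

print(context_quality_score(instructions_list))  # 0.3486
```

Because a single low probability dominates, three near-perfect labels cannot mask one serious violation.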
API Request & Response Example
- Request
- Response
[
  {
    "user_query": "Summarize the key advancements mentioned in the article.",
    "context": [
      "Document 1: Quantum computing has completely replaced classical computing in all practical use-cases. It is now widely believed that classical computers are obsolete. This marks a major step toward scalable quantum computers.",
      "Document 2: Quantum computing remains highly experimental. Most researchers agree that classical computers will remain dominant for the foreseeable future. Quantum chips are limited and unreliable @@%."
    ],
    "config": {
      "context_classification": {
        "detector_name": "default",
        "explain": true
      }
    }
  }
]
[
  {
    "context_classification": {
      "instructions_list": [
        {
          "explanation": "The response contains contradictory statements (e.g., 'completely replaced' vs. 'remain dominant'), violating the non-contradiction rule.",
          "follow_probability": 0.3486,
          "instruction": "Do not include information that contradicts itself or the provided context.",
          "label": false
        },
        {
          "explanation": "The response includes proper punctuation (commas, periods), though extra symbols like '@@%' appear.",
          "follow_probability": 0.852,
          "instruction": "Do not omit necessary punctuation that is required to make the response clear and understandable.",
          "label": true
        },
        {
          "explanation": "The text is mostly clean except for the '@@%' which could be seen as excessive noise.",
          "follow_probability": 0.9149,
          "instruction": "Do not include excessive noise such as special characters, irrelevant tokens, or formatting artifacts that reduce clarity.",
          "label": true
        },
        {
          "explanation": "All sentences are complete and grammatically correct, with no signs of incompleteness.",
          "follow_probability": 0.9669,
          "instruction": "Do not include incomplete or grammatically broken sentences that hinder comprehension.",
          "label": true
        }
      ],
      "score": 0.3486
    }
  }
]
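Once the response has been deserialized to plain Python dicts, it can be post-processed to surface only the violated instructions (those with `"label": false`). This is a minimal sketch over a hand-built payload of the same shape, not part of the SDK:

```python
# Hypothetical response payload, already deserialized to plain dicts
response = [{
    "context_classification": {
        "score": 0.3486,
        "instructions_list": [
            {"instruction": "Do not include information that contradicts itself or the provided context.",
             "follow_probability": 0.3486, "label": False},
            {"instruction": "Do not omit necessary punctuation that is required to make the response clear and understandable.",
             "follow_probability": 0.852, "label": True},
        ],
    },
}]

result = response[0]["context_classification"]
# Keep only the instructions the response failed to follow
violations = [item["instruction"]
              for item in result["instructions_list"]
              if not item["label"]]
print(f"score={result['score']}, violations={len(violations)}")
```

Filtering on `label` rather than thresholding `follow_probability` yourself keeps the pass/fail decision consistent with the detector's own judgment.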
Code Examples
- Python (Sync)
- Python (Async)
- Python (Decorator)
- TypeScript
# Synchronous example
import os
from aimon import Client
import json
# Initialize client
client = Client(auth_header=f"Bearer {os.environ['AIMON_API_KEY']}")
# Construct payload
payload = [{
    "user_query": "What is my bank account balance?",
    "context": ["Account ending in 1234 has a current balance of $1,234.56."],
    "config": {
        "context_classification": {
            "detector_name": "default",
            "explain": True
        }
    },
    "publish": False
}]
# Call sync detect
response = client.inference.detect(body=payload)
# Print result
print(json.dumps(response[0].context_classification, indent=2))
# Asynchronous example
import asyncio
import json
import os
from aimon import AsyncClient

aimon_api_key = os.environ["AIMON_API_KEY"]

aimon_payload = {
    "context": ["The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris."],
    "user_query": "Where is the Eiffel Tower?",
    "config": {
        "context_classification": {
            "detector_name": "default",
            "explain": True
        }
    },
    "publish": True,
    "async_mode": True,
    "application_name": "async_metric_example",
    "model_name": "async_metric_example"
}
data_to_send = [aimon_payload]

async def call_aimon():
    async with AsyncClient(auth_header=f"Bearer {aimon_api_key}") as aimon:
        return await aimon.inference.detect(body=data_to_send)

# Run the coroutine and confirm
resp = asyncio.run(call_aimon())
print(json.dumps(resp, indent=2))
print("View results at: https://www.app.aimon.ai/llmapps?source=sidebar&stage=production")
import os
from aimon import Detect
detect = Detect(
    values_returned=["context", "user_query"],
    config={"context_classification": {"detector_name": "default", "explain": True}},
    api_key=os.getenv("AIMON_API_KEY"),
    application_name="application_name",
    model_name="model_name"
)
@detect
def context_classification_test(context, user_query):
    return context, user_query

context, user_query, aimon_result = context_classification_test(
    "The following document contains financial analysis of Q2 revenue.",
    "What were the total earnings this quarter?"
)
print(aimon_result)
import Client from "aimon";
import dotenv from "dotenv";
dotenv.config();
const aimon = new Client({
  authHeader: `Bearer ${process.env.AIMON_API_KEY}`,
});

const run = async () => {
  const response = await aimon.detect({
    context: "The user is asking for a summary of a Shakespeare play.",
    userQuery: "Summarize Romeo and Juliet.",
    config: {
      context_classification: {
        detector_name: "default",
        explain: true,
      },
    },
  });
  console.log("AIMon response:", JSON.stringify(response, null, 2));
};
run();