Completeness
The Completeness API evaluates how thoroughly a generated response addresses a user query based on the provided context. The context typically includes background documents and the original query that was passed to the language model. The goal is to determine whether the generated output fully covers all the necessary information, accurately reflects the context, and delivers a complete response without omissions or gaps. This is particularly useful in use cases such as retrieval-augmented generation, customer support, or domain-specific QA, where missing or incomplete answers can significantly reduce user trust and task success.
Response Format
The API returns a structured object containing the completeness evaluation. Each result includes:
- `score` (`float`, 0.0–1.0): A numerical indicator of how completely the response addresses the query:
  - 0.0–0.2: The response is incomplete or not relevant.
  - 0.2–0.7: The response is partially complete, missing key elements or details.
  - 0.7–1.0: The response is fully complete and covers all relevant information.
- `instructions_list` (`array`): A list of specific completeness criteria used to evaluate the response. Each item includes:
  - `instruction`: A rule representing one aspect of completeness.
  - `label`: Indicates whether the response followed the instruction (`true`) or violated it (`false`).
  - `follow_probability`: A confidence score indicating the likelihood that the instruction was followed.
  - `explanation`: A natural-language rationale explaining why the instruction was marked true or false.

Note: The `score` reflects the overall completeness, while the detailed reasoning is broken down across these instruction-level explanations.
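As a sketch of how these fields might be consumed downstream, the helper below buckets the `score` using the ranges above and collects any violated instructions. The function name and the sample result object are illustrative (the result mirrors the shape of the response example later on this page), not part of the API itself.

```python
def summarize_completeness(result):
    """Bucket the overall score and collect instructions the response violated."""
    comp = result["completeness"]
    score = comp["score"]
    # Thresholds mirror the score ranges documented above.
    if score <= 0.2:
        verdict = "incomplete"
    elif score <= 0.7:
        verdict = "partially complete"
    else:
        verdict = "fully complete"
    violations = [
        item["instruction"]
        for item in comp["instructions_list"]
        if not item["label"]
    ]
    return verdict, violations

# Hypothetical result object shaped like the API response.
result = {
    "completeness": {
        "score": 0.875,
        "instructions_list": [
            {"instruction": "Response should fully address all aspects of the user query",
             "label": True, "follow_probability": 0.96},
            {"instruction": "Response should provide examples or explanations when necessary",
             "label": False, "follow_probability": 0.44},
        ],
    }
}

verdict, violations = summarize_completeness(result)
print(verdict)     # fully complete
print(violations)  # ['Response should provide examples or explanations when necessary']
```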
API Request & Response Example

Request:
```json
[
  {
    "context": "You are a helpful assistant. Please read the following paragraph carefully and summarize its content in one clear and concise sentence. Use your own words instead of copying directly from the text, and ensure your summary captures the main idea of the entire paragraph. Paragraph: 'The Amazon rainforest, often referred to as the planet's lungs, produces about 20% of the world's oxygen. It spans across nine countries and is home to millions of species of flora and fauna. Deforestation in the Amazon has been increasing due to agriculture, logging, and climate change, threatening biodiversity and accelerating global warming.'",
    "generated_text": "The Amazon rainforest, essential for global oxygen and biodiversity, is increasingly endangered by human-driven deforestation and climate change.",
    "config": {
      "completeness": {
        "detector_name": "default",
        "explain": true
      }
    }
  }
]
```
Response:

```json
[
  {
    "completeness": {
      "instructions_list": [
        {
          "explanation": "The response summarizes the main points ('global oxygen', 'biodiversity', 'deforestation') but omits specific details like the 20% oxygen production and the nine countries.",
          "follow_probability": 0.9627,
          "instruction": "Response should fully address all aspects of the user query",
          "label": true
        },
        {
          "explanation": "The response captures key ideas ('global oxygen', 'biodiversity', 'deforestation') but does not explicitly mention the 20% oxygen figure or the nine countries.",
          "follow_probability": 0.7549,
          "instruction": "Response should include all key information explicitly stated in the context documents",
          "label": true
        },
        {
          "explanation": "The response includes most critical details but omits explicit mention of the 20% oxygen production and the nine countries.",
          "follow_probability": 0.9941,
          "instruction": "Response should not omit critical details needed to understand or answer the query",
          "label": true
        },
        {
          "explanation": "The response avoids incorrect interpretations and remains accurate, though it omits some explicit details.",
          "follow_probability": 0.9993,
          "instruction": "Response should avoid including incorrect or misleading interpretations of the context",
          "label": true
        },
        {
          "explanation": "The response is a single, clear sentence summarizing key points ('increasingly endangered by human-driven deforestation') which follows a logical structure.",
          "follow_probability": 0.9876,
          "instruction": "Response should follow a clear and logical structure, making it easy to understand",
          "label": true
        },
        {
          "explanation": "The response lacks any additional examples or detailed explanations to clarify complex points.",
          "follow_probability": 0.4378,
          "instruction": "Response should provide examples or explanations when necessary to clarify complex points",
          "label": false
        },
        {
          "explanation": "There is no contradiction; the response aligns with the context, e.g., mentioning deforestation and biodiversity.",
          "follow_probability": 0.9959,
          "instruction": "Response should not contradict information in the context or within itself",
          "label": true
        },
        {
          "explanation": "The summary accurately represents the main ideas without adding irrelevant content.",
          "follow_probability": 0.977,
          "instruction": "Response should accurately represent all relevant information without adding irrelevant content",
          "label": true
        }
      ],
      "score": 0.875
    }
  }
]
```
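In this example, the overall score happens to equal the fraction of instructions labeled `true` (7 of 8). The detector's actual aggregation is internal to the API, so the snippet below only reproduces the arithmetic of this particular example, not a guaranteed scoring rule:

```python
# The eight instruction labels from the response above, in order.
labels = [True, True, True, True, True, False, True, True]

# 7 of 8 instructions followed -> 0.875, matching the reported score.
score = sum(labels) / len(labels)
print(score)  # 0.875
```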
Code Example
The example below demonstrates how to compute the completeness metric synchronously.
Python:

```python
from aimon import Detect
import os

# This is a synchronous example.
# Use async_mode=True to run it asynchronously.
# Use publish=True to publish results to the AIMon UI.
detect = Detect(
    values_returned=['context', 'generated_text'],
    config={"completeness": {"detector_name": "default", "explain": True}},
    publish=True,
    api_key=os.getenv("AIMON_API_KEY"),
    application_name="my_awesome_llm_app",
    model_name="my_awesome_llm_model"
)

@detect
def my_llm_app(context, query):
    my_llm_model = lambda context, query: f'''I am a LLM trained to answer your questions.
But I often don't fully answer your question.
The query you passed is: {query}.
The context you passed is: {context}.'''
    generated_text = my_llm_model(context, query)
    return context, generated_text

context, gen_text, aimon_res = my_llm_app("This is a context", "This is a query")
print(aimon_res)
```
TypeScript:

```typescript
import Client from "aimon";

// Create the AIMon client using an API key (retrievable from your user profile in the UI).
const aimon = new Client({ authHeader: "Bearer API_KEY" });

const runDetect = async () => {
  const generatedText = "your_generated_text";
  const context = ["your_context"];
  const userQuery = "your_user_query";
  const config = { completeness: { detector_name: "default", explain: true } };

  // Analyze the quality of the generated output using AIMon.
  const response = await aimon.detect(
    generatedText,
    context,
    userQuery,
    config,
  );

  console.log("Response from detect:", response);
};

runDetect();
```