Completeness

Given a context, generated text, and optionally a user query or a reference text, this API detects whether the generated text completely answers the user's question. The context should include the context documents that were passed to the LLM. The output contains a "score" between 0.0 and 1.0 that indicates the degree of completeness:

- 0.0 to 0.2: the generated answer is not relevant to the user query.
- 0.2 to 0.7: the generated answer is relevant but misses some information.
- 0.7 to 1.0: the generated answer is relevant and fully captures all of the information.

The API also returns a "reasoning" field, a text explanation of the score that makes a best-effort attempt to point out the information missing from the expected answer.
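As an illustration, here is a minimal sketch of how an application might act on these score bands. The function name and the decision to branch on these exact cutoffs are assumptions for the example, not part of the API:

def interpret_completeness(score: float) -> str:
    # Map a completeness score to the bands described above.
    # The cutoffs mirror the documented ranges; how your application
    # reacts (retry, flag for review, etc.) is up to you.
    if score <= 0.2:
        return "not relevant to the user query"
    if score <= 0.7:
        return "relevant but missing some information"
    return "relevant and fully complete"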

Example Request

[
  {
    "context": "Paul Graham is an English-born computer scientist, entrepreneur, venture capitalist, author, and essayist. He is best known for his work on Lisp, his former startup Viaweb (later renamed Yahoo! Store), co-founding the influential startup accelerator and seed capital firm Y Combinator, his blog, and Hacker News.",
    "generated_text": "Paul Graham has worked in several key areas throughout his career: IBM 1401: He began programming on the IBM 1401 during his school years, specifically in 9th grade. In addition, he has also been involved in writing essays and sharing his thoughts on technology, startups, and programming.",
    "config": {
      "completeness": {
        "detector_name": "default"
      }
    }
  }
]

Example Response

[
  {
    "completeness": {
      "reasoning": "The generated answer is somewhat relevant to the query but omits significant information from the context documents, particularly about Paul Graham's contributions to Lisp, co-founding Y Combinator, and his work with Viaweb. It also includes details like the IBM 1401 that are not mentioned in the context, leading to inaccuracies and potential confusion. Thus, it doesn't provide a complete understanding of his career and achievements.",
      "score": 0.227
    }
  }
]
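When calling the API over HTTP, the score and reasoning can be read straight out of this structure. Below is a minimal parsing sketch, assuming the response body has already been received as the JSON shown above; the variable names are illustrative:

import json

# `raw_body` stands in for the HTTP response body returned by the API.
raw_body = '[{"completeness": {"reasoning": "...", "score": 0.227}}]'

for item in json.loads(raw_body):
    completeness = item["completeness"]
    print(f"score: {completeness['score']:.3f}")
    print(f"reasoning: {completeness['reasoning']}")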

Example (Synchronous detection)

The example below demonstrates how to use the completeness detector in a synchronous manner.

from aimon import Detect

# Configure the decorator to run the completeness detector on the
# context and generated_text returned by the decorated function.
detect = Detect(values_returned=['context', 'generated_text'], config={"completeness": {"detector_name": "default"}})

@detect
def my_llm_app(context, query):
    # `my_llm_model` is a placeholder for your own LLM call.
    generated_text = my_llm_model(context, query)
    return context, generated_text
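A possible invocation of the decorated function is sketched below. It assumes, as in other AIMon detector examples, that the decorator appends a response object carrying the detection output to the values returned by my_llm_app; the attribute names used to reach the completeness result are illustrative.

# Hypothetical call; `my_llm_model` above must be replaced with your own LLM.
context_docs = "Paul Graham is an English-born computer scientist ..."
query = "What is Paul Graham best known for?"

# The decorator is assumed to append an AIMon response object to the
# function's return values.
context, generated_text, aimon_response = my_llm_app(context_docs, query)

# The completeness result is assumed to mirror the JSON response above,
# containing a "score" and a "reasoning" field.
print(aimon_response.detect_response.completeness)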