Conciseness

Given a context, generated text, and optionally a user query or a reference text, this API detects whether the generated text is concise or verbose in addressing the user query. The context should include the context documents along with the user query, as passed in to the LLM. The output contains a "score" between 0.0 and 1.0 that indicates the degree of conciseness:

- 0.0 to 0.2: the generated answer is very verbose and contains a lot of unnecessary information that is not relevant to the user query.
- 0.2 to 0.7: the generated answer is mostly relevant to the user query but includes some text that is not necessary.
- 0.7 to 1.0: the generated answer is very concise and properly addresses all the important points of the user query.

The API also includes a "reasoning" field: a text-based explanation of the score. It makes a best-effort attempt to point out the unnecessary information that was included in the output.
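The score bands above can be sketched as a small helper for interpreting a returned score. This is illustrative only: the band boundaries follow the documentation, while the function name and labels are assumptions.

```python
def conciseness_band(score: float) -> str:
    """Map a conciseness score in [0.0, 1.0] to its documented band.

    Band boundaries follow the API documentation; labels are illustrative.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    if score < 0.2:
        return "very verbose"
    if score < 0.7:
        return "mostly relevant, some unnecessary text"
    return "concise"
```

For example, the score of 0.25 in the response below falls into the middle band.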

Example Request

[
  {
    "context": "Paul Graham is an English-born computer scientist, entrepreneur, venture capitalist, author, and essayist. He is best known for his work on Lisp, his former startup Viaweb (later renamed Yahoo! Store), co-founding the influential startup accelerator and seed capital firm Y Combinator, his blog, and Hacker News.",
    "generated_text": "Paul Graham has worked in several key areas throughout his career: IBM 1401: He began programming on the IBM 1401 during his school years, specifically in 9th grade. In addition, he has also been involved in writing essays and sharing his thoughts on technology, startups, and programming.",
    "config": {
      "conciseness": {
        "detector_name": "default"
      }
    }
  }
]

Example Response

[
  {
    "conciseness": {
      "reasoning": "The generated answer includes relevant information about Paul Graham's background but contains unnecessary details that do not directly relate to the user query, which is unspecified. While it touches on his programming beginnings and writing, it lacks emphasis on the key aspects of his impact and contributions, making it less concise than it could be.",
      "score": 0.25
    }
  }
]
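A minimal sketch of extracting the score and reasoning from a response payload shaped like the example above (the variable names are illustrative, and the reasoning string is shortened):

```python
import json

# A shortened response payload in the documented shape.
response_body = """
[
  {
    "conciseness": {
      "reasoning": "The generated answer contains unnecessary details.",
      "score": 0.25
    }
  }
]
"""

results = json.loads(response_body)
# Each request item maps to one result item; read the conciseness block.
conciseness = results[0]["conciseness"]
print(conciseness["score"])      # -> 0.25
print(conciseness["reasoning"])
```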

Example (Synchronous detection)

The example below demonstrates how to use the conciseness detector in a synchronous manner.

from aimon import Detect

# Configure the detector: the decorated function must return the values
# named in `values_returned`, in the same order.
detect = Detect(values_returned=['context', 'generated_text'],
                config={"conciseness": {"detector_name": "default"}})

@detect
def my_llm_app(context, query):
    # my_llm_model is a placeholder for your own LLM call.
    generated_text = my_llm_model(context, query)
    return context, generated_text