aimon.Detect
Class: Detect
A simple class-based decorator for real-time evaluation or continuous monitoring, used to detect quality issues in LLM-generated text.
Constructor
Detect(values_returned, api_key=None, config=None, async_mode=False, publish=False, application_name=None, model_name=None)
Parameters:
values_returned (list): A list of values in the order returned by the decorated function. Acceptable values are 'generated_text', 'context', 'user_query', and 'instructions'.
api_key (str, optional): The API key to use for the AIMon client. If not provided, it will be read from the 'AIMON_API_KEY' environment variable.
config (dict, optional): A dictionary of configuration options for the detectors. Default is {'hallucination': {'detector_name': 'default'}}.
async_mode (bool, optional): If True, the decorated function returns immediately with a DetectResult object rather than waiting for detection to complete. Default is False.
publish (bool, optional): If True, the payload will be published to AIMon and can be viewed in the AIMon UI. Default is False.
application_name (str, optional): Required if publish is True. The name of the application in AIMon.
model_name (str, optional): Required if publish is True. The name of the model in AIMon.
Raises:
ValueError: If the API key is None, if values_returned is empty or does not contain 'context', or if publish is True and either application_name or model_name is not provided.
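The constructor-time checks above can be sketched in isolation. The helper below is a hypothetical illustration of the documented ValueError conditions, not AIMon's actual implementation:

```python
def validate_detect_args(api_key, values_returned, publish,
                         application_name, model_name):
    # Hypothetical helper mirroring the documented ValueError conditions.
    if api_key is None:
        raise ValueError("An API key is required (pass api_key or set AIMON_API_KEY).")
    if not values_returned or 'context' not in values_returned:
        raise ValueError("values_returned must be non-empty and contain 'context'.")
    if publish and (application_name is None or model_name is None):
        raise ValueError("application_name and model_name are required when publish is True.")
```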
Methods
__call__(func)
This method is called when the Detect instance is used as a decorator. It wraps the decorated function and handles the detection process.
Parameters:
func
(callable): The function to be decorated.
Returns:
- A wrapped version of the input function that includes AIMon detection.
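The general shape of a class-based decorator whose __call__ wraps the target function and appends a result to its return tuple can be sketched as follows. This is a simplified stand-in for illustration only, not AIMon's implementation:

```python
import functools

class SketchDetect:
    """Simplified stand-in for Detect: wraps a function, appends a result."""

    def __init__(self, values_returned):
        self.values_returned = values_returned

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            returned = func(*args, **kwargs)
            # A real detector would map `returned` onto self.values_returned
            # and call the AIMon API; here we append a placeholder result.
            result = {"checked": dict(zip(self.values_returned, returned))}
            return (*returned, result)
        return wrapper

@SketchDetect(values_returned=['context', 'generated_text'])
def answer(context):
    return context, f"summary of {context}"
```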
Usage Example
from aimon import Detect
import os
detect = Detect(
    values_returned=['context', 'generated_text', 'user_query'],
    api_key=os.getenv('AIMON_API_KEY'),
    config={
        'hallucination': {'detector_name': 'default'},
        'toxicity': {'detector_name': 'default'}
    },
    publish=True,
    application_name='my_summarization_app',
    model_name='gpt-3.5-turbo'
)
def your_llm_function(context, query):
    # Your LLM call goes here; this stub just returns a formatted string.
    return f"Summary of '{context}' based on query: {query}"

@detect
def generate_summary(context, query):
    summary = your_llm_function(context, query)
    return context, summary, query
context = "The quick brown fox jumps over the lazy dog."
query = "Summarize the given text."
context, summary, query, aimon_result = generate_summary(context, query)
print(f"Hallucination score: {aimon_result.detect_response.hallucination['score']}")
print(f"Toxicity score: {aimon_result.detect_response.toxicity['score']}")
Notes
- The values_returned list must contain 'context' and should include 'generated_text'.
- If async_mode is True, publish is automatically set to True.
- When publish is True, both application_name and model_name must be provided.
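The role of values_returned, pairing each element of the function's returned tuple with its declared field name, can be illustrated with a small self-contained sketch. This is illustrative only; the real client's payload handling may differ:

```python
def build_payload(values_returned, returned_values):
    # Pair each returned value with its declared name, in order.
    if len(values_returned) != len(returned_values):
        raise ValueError("Function must return one value per entry in values_returned.")
    payload = dict(zip(values_returned, returned_values))
    if 'context' not in payload:
        raise ValueError("values_returned must contain 'context'.")
    return payload
```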