
Instruction Adherence

The Instruction Adherence (IA) checker, integrated within the Aimon SDK, evaluates whether a given LLM response (generated_text) correctly follows a list of specified instructions. It can also optionally extract and score instructions embedded within a prompt given to the LLM (user_query). This evaluation model provides a detailed breakdown of adherence for each instruction, explanations, and an overall adherence score.

Example request payload:

{
  "user_query": "Summarize the plot of Romeo and Juliet.",
  "instructions": [
    "Mention the names 'Romeo' and 'Juliet'.",
    "State the main conflict (feuding families).",
    "Mention the tragic ending explicitly (e.g., death)."
  ],
  "generated_text": "Romeo Montague and Juliet Capulet fall in love despite their families' bitter feud. Their passionate romance brings their families together.",
  "config": {
    "instruction_adherence": {
      "detector_name": "default",
      "explain": "negatives_only",
      "extract_from_system": true
    }
  }
}

Key configuration options within instruction_adherence:

  • detector_name: Must be set to "default".
  • explain: Controls whether textual explanations are returned for adherence labels.
    • false (default): No explanations.
    • true: Explanations for all instructions.
    • "negatives_only": Explanations only for instructions labeled as not followed.
  • extract_from_system: If true, the checker attempts to extract and evaluate implicit instructions from user_query.
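The checker returns a per-instruction breakdown alongside the overall score. As a hypothetical illustration (the field names below are assumptions for this sketch, not the authoritative Aimon API schema), collecting the instructions labeled as not followed might look like:

```python
# Hypothetical response shape -- "instructions_list", "label", and
# "explanation" are assumed field names used only for illustration.
sample_response = {
    "instructions_list": [
        {"instruction": "Mention the names 'Romeo' and 'Juliet'.",
         "label": True, "explanation": ""},
        {"instruction": "State the main conflict (feuding families).",
         "label": True, "explanation": ""},
        {"instruction": "Mention the tragic ending explicitly (e.g., death).",
         "label": False,
         "explanation": "The response never mentions the tragic ending or the deaths."},
    ],
    "score": 0.6667,
}

# With explain set to "negatives_only", only the not-followed instructions
# would carry explanations; gather them for review.
not_followed = [
    (item["instruction"], item["explanation"])
    for item in sample_response["instructions_list"]
    if not item["label"]
]
```

Here `not_followed` holds one entry: the tragic-ending instruction and its explanation.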

"score": 0.75 means 75% of instructions and extractions were followed, 25% were not.

Code Example

The following example shows how to use the Instruction Adherence checker via the Aimon SDK.

from aimon import Detect
import os

# Configure the detector
detect = Detect(
    values_returned=['user_query', 'instructions', 'generated_text'],
    config={
        "instruction_adherence": {
            "detector_name": "default",
            "explain": True,
            "extract_from_system": False,
        }
    },
    api_key=os.getenv("AIMON_API_KEY"),
    application_name="my_llm_app_ia_example",
    model_name="my_model_v1"
)

# Decorate your LLM application function
@detect
def my_llm_app(query: str, explicit_instructions: list[str]):
    # Example: Generate a response based on query and instructions
    # Replace with your actual LLM call
    response_text = (
        f"Based on your request '{query}', and considering the instructions "
        "provided, here is a poem: A quick brown fox jumps high. "
        "So very fast it goes by now."
    )

    # Return values matching the order in 'values_returned'
    return query, explicit_instructions, response_text

# Example Usage
user_query = "Write a 10-word poem about a fox."
instructions_to_follow = [
    'The poem must not contain the letter "e".',
    "The poem must contain exactly 10 words.",
]

# Call the decorated function
# The return value includes the original outputs plus the Aimon results
query_out, instructions_out, response_out, aimon_res = my_llm_app(user_query, instructions_to_follow)
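To see why the checker would flag this particular response, the two example instructions can also be verified locally. This is a plain-Python sketch independent of the SDK, checking only the poem portion of the generated text:

```python
poem = "A quick brown fox jumps high. So very fast it goes by now."

# Instruction 1: the poem must not contain the letter "e"
no_letter_e = "e" not in poem.lower()

# Instruction 2: the poem must contain exactly 10 words
word_count = len(poem.split())
exactly_ten_words = word_count == 10

# Both instructions fail here: "very" and "goes" contain "e",
# and the poem has 13 words, not 10.
```

An adherence check on this response would therefore label both instructions as not followed.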

Example notebook

https://colab.research.google.com/drive/1fXXqmGBVJeTnoBla_A-30gy8eUouOuOw
