📄️ Hallucination
Hallucination is the presence of any statement in the LLM output that contradicts or violates the facts given to your LLM as context. Context is usually provided via a RAG (Retrieval-Augmented Generation) pipeline. These "hallucinated" statements can be factual inaccuracies or fabrications of new information.
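As a rough illustration of the inputs involved, the sketch below posts a context, user query, and generated answer to a hallucination-detection endpoint. The URL, request fields, and response keys are hypothetical placeholders, not the actual API.

```python
# Minimal sketch of a hallucination check; the endpoint URL, request
# fields, and response keys below are hypothetical placeholders.
import requests

payload = {
    "context": "The Eiffel Tower is 330 metres tall and located in Paris.",
    "user_query": "How tall is the Eiffel Tower?",
    "generated_text": "The Eiffel Tower is 500 metres tall.",  # contradicts the context
}

response = requests.post(
    "https://api.example.com/v1/detect/hallucination",  # placeholder endpoint
    json=payload,
)
result = response.json()

# A response of this kind might score the output and flag the contradicting sentence.
print(result.get("hallucination_score"), result.get("flagged_sentences"))
```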
📄️ Instruction Adherence
Given a set of “instructions”, the generated text, the input context and the user query, this API is able to detect whether the generated text adheres to those instructions.
📄️ Conciseness
Given a context, generated text and optionally a user query or a reference text, this API is able to detect if the generated text is concise.
📄️ Completeness
Given a context, generated text and optionally a user query or a reference text, this API is able to detect whether the generated text completely addresses the user query.
📄️ Toxicity
Given a context, generated text and optionally a user query or reference text, this API is able to generate various toxicity-related scores for the generated text.
📄️ Context Quality
Hallucinations and even other quality issues can often be traced back to poor quality of the retrieved context.