📄️ Hallucination
AIMon's latest hallucination detection model, HDM-1, beats competitors on various accuracy benchmarks while providing consistent hallucination scores in just a few hundred milliseconds.
📄️ Instruction Adherence
AIMon's Instruction Adherence model checks whether the generated text followed the instructions given to the LLM. It performs at 87% accuracy on a modified IFEVAL dataset.
📄️ Context Quality
Hallucinations and other quality issues can often be traced back to poor-quality context.
📄️ Context Relevance
LLM evaluation frameworks like RAGAS depend heavily on off-the-shelf LLMs (zero-, one-, or few-shot prompting). This approach suffers from variance, inconsistency, subjectivity, and cost inefficiency.
📄️ Completeness
Given a context, generated text, and optionally a user query or a reference text, this API is able to detect whether the generated text completely addresses the query.
📄️ Conciseness
Given a context, generated text, and optionally a user query or a reference text, this API is able to detect if the generated text is concise.
📄️ Toxicity
Given a context, generated text, and optionally a user query or reference text, this API is able to generate various toxicity labels for the generated text.