A retrieval-augmented generation (RAG) evaluation metric that assesses how well a generated answer addresses the user’s question, typically judged by comparison to a reference answer or by human/LLM raters. It is frequently reported alongside context relevance and groundedness.
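As a minimal sketch of one lightweight automated proxy (not the only or canonical method), answer relevance can be approximated by embedding the question and the answer and scoring their cosine similarity. The sentence-transformers library, the model name, and the example texts below are illustrative assumptions; LLM-as-judge or human rating is the more common approach in practice.

```python
# Illustrative proxy for answer relevance: cosine similarity between the
# embeddings of the user question and the generated answer. Higher scores
# suggest the answer stays on-topic; this does NOT check factual correctness
# or groundedness in the retrieved context.
from sentence_transformers import SentenceTransformer, util

# Model choice is an assumption; any sentence-embedding model would work.
model = SentenceTransformer("all-MiniLM-L6-v2")

def answer_relevance(question: str, answer: str) -> float:
    """Return a cosine-similarity score (roughly -1 to 1); higher = more relevant."""
    embeddings = model.encode([question, answer], convert_to_tensor=True)
    return util.cos_sim(embeddings[0], embeddings[1]).item()

score = answer_relevance(
    "What year was the Eiffel Tower completed?",
    "The Eiffel Tower was completed in 1889.",
)
print(f"answer relevance ≈ {score:.2f}")
```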