Output Quality Evaluator (LLM-as-a-Judge)

v1
Model: GPT-4
Difficulty: Intermediate
Tags: evaluation, LLM-as-a-judge, quality

Prompt

Evaluate the following output [output text] for [criteria such as accuracy, clarity, relevance], and provide a score and reasoning.

Why It Works

Naming explicit criteria and requiring both a score and reasoning gives the judge model a structured rubric, which produces more consistent and actionable feedback than an open-ended "is this good?" question.
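The pattern above can be sketched in code: one helper fills the judge template with the output under review and the rubric, and another parses the "Score: N/10" verdict out of the judge's reply. This is a minimal illustration, not part of the original prompt; the function names, the exact template wording, and the assumption that the judge answers in the "Score: N/10" format shown in the example output are all assumptions for the sketch.

```python
import re

# Hypothetical template mirroring the catalog prompt (wording assumed).
EVAL_TEMPLATE = (
    "Evaluate the following output for {criteria}, "
    "and provide a score and reasoning.\n\nOutput:\n{output}"
)

def build_eval_prompt(output: str, criteria: list[str]) -> str:
    """Fill the judge template with the text under review and the rubric."""
    return EVAL_TEMPLATE.format(output=output, criteria=", ".join(criteria))

def parse_verdict(response: str) -> tuple[int, str]:
    """Extract a 'Score: N/10' value and the trailing reasoning from a reply."""
    match = re.search(r"Score:\s*(\d+)\s*/\s*10[.:\-\s]*(.*)", response, re.S)
    if not match:
        raise ValueError("Judge reply did not contain a 'Score: N/10' line")
    return int(match.group(1)), match.group(2).strip()

# Example round trip (judge reply hard-coded here; in practice it would
# come from whatever LLM API you send the prompt to).
prompt = build_eval_prompt(
    "The sky is blue because of Rayleigh scattering.",
    ["accuracy", "clarity"],
)
score, reasoning = parse_verdict("Score: 8/10 - Accurate but terse.")
print(score)      # 8
print(reasoning)  # Accurate but terse.
```

Parsing the verdict into a number is what makes this usable in a pipeline, e.g. gating a release on `score >= 7` or aggregating scores across a test set.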

Example Output

Score: 8/10. Accurate but lacks detail in explaining cause-and-effect relationships.
