Output Quality Evaluator (LLM-as-a-Judge)

v1
Model: GPT-4 | Level: Intermediate
Tags: evaluation, LLM-as-a-judge, quality
Prompt (19 words)

Evaluate the following output [output text] for [criteria such as accuracy, clarity, relevance], and provide a score and reasoning.
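The bracketed placeholders are meant to be filled in before the prompt is sent to a model. A minimal sketch of doing that programmatically (the function name `build_judge_prompt` is illustrative, not part of the original prompt):

```python
def build_judge_prompt(output_text: str, criteria: list[str]) -> str:
    """Fill the [output text] and [criteria ...] placeholders of the judge prompt."""
    # Render the criteria list as comma-separated text, e.g. "accuracy, clarity"
    criteria_str = ", ".join(criteria)
    return (
        f"Evaluate the following output {output_text} for {criteria_str}, "
        "and provide a score and reasoning."
    )


prompt = build_judge_prompt("The sky is blue.", ["accuracy", "clarity", "relevance"])
```

The resulting string can then be sent to any chat model as the user message.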


Why It Works

Naming explicit criteria and requiring both a score and reasoning gives the model a structured rubric, which produces more consistent, higher-quality feedback than an open-ended "is this good?" question.

Example Output

Score: 8/10 - Accurate but lacks detail in explaining cause-effect relationships.

