Output Quality Evaluator (LLM-as-a-Judge)

v1
Model: GPT-4 · Level: Intermediate
Tags: evaluation, LLM-as-a-judge, quality
Prompt (19 words)

Evaluate the following output [output text] for [criteria such as accuracy, clarity, relevance], and provide a score and reasoning.


Why It Works

The bracketed placeholders make the evaluation criteria explicit and require both a score and reasoning, turning a vague "rate this" request into structured, higher-quality feedback.
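The bracketed slots in the prompt can be filled programmatically before sending the text to a model. A minimal sketch, assuming a simple string template (the function and variable names here are illustrative, and the actual model call is omitted):

```python
# Fill the judge-prompt template with a concrete output and criteria list.
# `build_judge_prompt` is an illustrative helper name, not a library API.
JUDGE_TEMPLATE = (
    "Evaluate the following output {output} for {criteria}, "
    "and provide a score and reasoning."
)

def build_judge_prompt(output: str, criteria: list) -> str:
    """Substitute the bracketed placeholders with concrete values."""
    return JUDGE_TEMPLATE.format(
        output=repr(output),           # quote the output so it reads as a unit
        criteria=", ".join(criteria),  # e.g. "accuracy, clarity, relevance"
    )

prompt = build_judge_prompt(
    "Photosynthesis converts sunlight into oxygen.",
    ["accuracy", "clarity", "relevance"],
)
print(prompt)
```

The resulting string is what you would pass as the user message to the judging model.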

Example Output

Score: 8/10 — Accurate but lacks detail in explaining cause-effect relationships.
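If you use this prompt in an automated pipeline, the judge's reply has to be parsed back into a numeric score. A hedged sketch, assuming the response follows the "Score: N/10 — reasoning" shape shown above (real model outputs vary, so production code would need more robust handling):

```python
import re

def parse_judge_response(text):
    # Assumes the "Score: N/10 - reasoning" shape shown in the example
    # output; accepts a hyphen, en dash, or em dash as the separator.
    m = re.match(r"Score:\s*(\d+)/10\s*[—–-]\s*(.*)", text)
    if m is None:
        return None  # response did not follow the expected format
    return int(m.group(1)), m.group(2).strip()

score, reasoning = parse_judge_response(
    "Score: 8/10 — Accurate but lacks detail in explaining "
    "cause-effect relationships."
)
print(score)  # 8
```

Returning `None` on a format mismatch lets the caller decide whether to retry the judge call or discard the sample.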

