Error Analysis AI Prompt
You want to improve a model by understanding where it fails hardest.
This prompt dives into the model's most damaging mistakes to uncover systematic failure modes. It is useful when overall metrics look acceptable but users still complain or critical edge cases remain unresolved. Clustering the worst errors can reveal missing features, bad data, or segment-specific model gaps.
Conduct a deep error analysis on this model's worst predictions.

1. Identify the 50 most confidently wrong predictions (highest predicted probability for the wrong class, or largest absolute residual for regression).
2. Profile these error cases:
   - What is the distribution of their feature values compared to correctly predicted cases?
   - Are they concentrated in a specific subgroup, time period, or region?
   - Do they share a common pattern in the raw data?
3. Cluster the error cases using k-means (k=3–5) and describe what characterizes each error cluster.
4. For each cluster, propose a specific model improvement: more training data of that type, a new feature, a separate model for that segment, or a data quality fix.
5. Estimate: if the top error cluster were fixed, how much would overall model performance improve?

Return the error profile table, cluster descriptions, and prioritized improvement recommendations.
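The first and third steps above can be sketched in a few lines of Python. This is a minimal illustration, not the prompt's required implementation: the array names `y_true`, `proba`, and `X` are assumed inputs (true labels, predicted class probabilities, and the feature matrix), and scikit-learn's `KMeans` stands in for whatever clustering the AI chooses.

```python
import numpy as np
from sklearn.cluster import KMeans

def top_confidently_wrong(y_true, proba, n=50):
    """Indices of the n most confidently wrong predictions (step 1)."""
    pred = proba.argmax(axis=1)
    wrong = np.flatnonzero(pred != y_true)
    conf = proba[wrong, pred[wrong]]          # confidence in the wrong class
    return wrong[np.argsort(conf)[::-1][:n]]  # highest confidence first

def cluster_errors(X, error_idx, k=3, seed=0):
    """k-means over the feature rows of the error cases (step 3)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    return km.fit_predict(X[error_idx])
```

Once you have the cluster labels, profiling each cluster (step 2) is a matter of comparing feature distributions inside each cluster against the correctly predicted cases.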
When to use this prompt
Topline metrics hide concentrated failure pockets.
You need concrete ideas for the next iteration based on real errors.
You want error cases grouped into interpretable patterns.
What the AI should return
A profile of the worst predictions, clusters of error cases with descriptions, likely causes, and prioritized recommendations for the improvements most likely to reduce future errors.
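The "how much would performance improve" part of the recommendation is a best-case estimate, not a guarantee. A back-of-envelope sketch, assuming accuracy as the metric (the function and variable names here are illustrative):

```python
def upper_bound_accuracy_gain(n_total, n_correct, cluster_size):
    """Best-case accuracy gain if every error in one cluster were fixed."""
    return (n_correct + cluster_size) / n_total - n_correct / n_total

# e.g. 1000 samples, 870 correct, top error cluster holds 60 of the 130 errors:
gain = upper_bound_accuracy_gain(1000, 870, 60)  # up to +6 accuracy points
```

In practice a fix rarely converts every error in a cluster, so treat this as an upper bound for prioritizing which cluster to tackle first.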
How to use this prompt
Open your data context
Load your dataset, notebook, or working environment so the AI can operate on the actual project context.
Copy the prompt text
Use the copy button above and paste the prompt into the AI assistant or prompt input area.
Review the output critically
Check whether the result matches your data, assumptions, and desired format before moving on.
Chain into the next prompt
Once you have the first result, continue deeper with related prompts in Model Evaluation.
Frequently asked questions
What does the Error Analysis prompt do?
It gives you a structured starting point for model-evaluation work as a data scientist and helps you move faster instead of starting from a blank page.
Who is this prompt for?
It is designed for data scientist workflows and marked as advanced, so it works well as a guided starting point for that level of experience.
What type of prompt is this?
Error Analysis is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.
Can I use this outside MLJAR Studio?
Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.
What should I open next?
Natural next steps from here are Calibration Analysis, Classification Report, and Cross-Validation Deep Dive.