Iris Species Classification with Decision Tree
Train a decision tree classifier on the Iris dataset, evaluate accuracy, and visualize the decision boundaries using an AI data analyst.
What this AI workflow does
This AI Data Analyst workflow loads the Iris dataset from scikit-learn and creates an 80/20 train-test split. It trains a decision tree classifier, reports accuracy, and generates a classification report. It also plots a confusion matrix and visualizes feature importances as a bar chart.
Who this example is for
This is for learners and practitioners who want a compact, reproducible example of multiclass classification with a decision tree. It helps validate model performance with standard metrics and interpret which Iris features drive predictions.
Expected analysis outcomes
These are the results the AI workflow is expected to generate.
- Train/test split with dataset shapes printed
- Decision tree model trained with accuracy and classification report
- Confusion matrix heatmap for error inspection
- Feature importance bar chart highlighting the most influential features
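The first two expected outcomes can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the code any particular model run generated; the `random_state=42` and `stratify=y` choices are assumptions added here for reproducibility.

```python
# Sketch of the workflow's first steps: load Iris, split 80/20,
# train a decision tree, and report accuracy plus a classification report.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42, stratify=iris.target
)
print(X_train.shape, X_test.shape)  # (120, 4) (30, 4)

clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print(f"Accuracy: {accuracy_score(y_test, y_pred):.3f}")
print(classification_report(y_test, y_pred, target_names=iris.target_names))
```

With 150 samples in Iris, an 80/20 split yields 120 training and 30 test rows, matching the printed shapes above.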
Tools and libraries used
Main Python packages and tooling used to run this AI data analysis task.
- scikit-learn
- pandas
- numpy
- matplotlib
- seaborn
Prompt sequence
This is the exact list of prompts used in this workflow. The same prompt sequence is sent to each model so outputs and scores can be compared fairly.
1. load iris dataset (from sklearn) and split into train/test sets with 80/20 ratio
2. train a decision tree classifier and show accuracy
3. plot the confusion matrix
4. show feature importances as a bar chart
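Prompts 3 and 4 ask for the two visualizations. A hedged sketch of what they might produce is shown below; it rebuilds the split and model so the snippet is self-contained, and the file names and the headless `Agg` backend are assumptions made here so the script runs without a display.

```python
# Sketch of the confusion matrix heatmap (seaborn) and the
# feature-importance bar chart (matplotlib) requested by prompts 3 and 4.
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42, stratify=iris.target
)
clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

# Confusion matrix heatmap for error inspection
cm = confusion_matrix(y_test, clf.predict(X_test))
sns.heatmap(cm, annot=True, fmt="d",
            xticklabels=iris.target_names, yticklabels=iris.target_names)
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.savefig("confusion_matrix.png")
plt.close()

# Feature-importance bar chart; importances sum to 1 for a fitted tree
plt.bar(iris.feature_names, clf.feature_importances_)
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.savefig("feature_importances.png")
plt.close()
```

For Iris, a decision tree typically assigns most of its importance to the two petal measurements, which is what the score cards below report for the gpt-oss:120b run.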
Model Comparison
We compared several LLM models on the same analysis task. The table below shows their scores side by side. You can also open each model run to check the full conversation and notebook results.
| Model Source | Score | Task | Execution | Output | Reasoning | Reliability | Run |
|---|---|---|---|---|---|---|---|
| gemma4:31b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gemma4:31b conversation |
| glm-5.1 | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open glm-5.1 conversation |
| gpt-5.4 | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gpt-5.4 conversation |
| gpt-oss:120b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gpt-oss:120b conversation |
| qwen3-coder-next | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open qwen3-coder-next conversation |
| qwen3.5:397b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open qwen3.5:397b conversation |
How scoring works
Each run is graded on five dimensions:
- Task Completion (0-2): how fully the workflow answered the requested analysis task and completed required steps.
- Execution Correctness (0-2): whether generated code is correct, consistent, and likely runnable without major fixes.
- Output Quality (0-3): how well tables, charts, and results match the expected analytical outcomes.
- Reasoning Quality (0-2): clarity and correctness of interpretation, explanation, and conclusions.
- Reliability (0-1): robustness of the run, including low hallucination risk and stable behavior.
The final score is the sum of these dimensions, so the total ranges from 0 to 10. Higher values indicate more complete, correct, and reliable notebook results for this workflow.
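The scoring arithmetic is a straight sum of the five dimensions. A small illustration with hypothetical dimension scores (example numbers, not taken from any real run):

```python
# Hypothetical per-dimension scores; the maximums match the rubric above.
scores = {
    "task_completion": 2,        # 0-2
    "execution_correctness": 2,  # 0-2
    "output_quality": 3,         # 0-3
    "reasoning_quality": 2,      # 0-2
    "reliability": 1,            # 0-1
}
total = sum(scores.values())
print(f"Total: {total}/10")  # Total: 10/10
```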
AI Analysis with gpt-oss:120b
This run uses AI with a step-by-step list of prompts to perform the analysis task "Iris Species Classification with Decision Tree". Below, you can review the full conversation generated with gpt-oss:120b. After the conversation, we show detailed score cards for this run.
Score cards for gpt-oss:120b for Iris Species Classification with Decision Tree
Total score: 10/10
Task Completion: 2/2
Excellent: All major requested steps were completed: train/test split, decision tree training with accuracy, confusion matrix plot, and feature-importance bar chart.
Execution Correctness: 2/2
Excellent: The code shown is coherent and likely runnable; the outputs confirm successful execution of the split, model fit, evaluation, and plots.
Output Quality: 3/3
Excellent: Outputs match the expected outcomes semantically, including the split shapes, accuracy, confusion matrix heatmap, and feature importances showing petal length and petal width as dominant.
Reasoning Quality: 2/2
Excellent: The workflow uses appropriate ML steps and correctly interprets the feature-importance results. The explanations are clear and aligned with the task.
Reliability: 1/1
Excellent: The workflow is consistent and robust overall, with only a minor deprecation warning that does not affect the results.
Try MLJAR Studio
Run the same type of AI-powered data analysis on your own datasets with conversational notebooks in MLJAR Studio.
Explore More AI Analysis Examples
Discover additional workflows across categories. Each example includes prompts, conversation outputs, and model-level scoring so you can compare approaches and results.