Breast Cancer Diagnosis with SVM in Python
Load the Breast Cancer Wisconsin dataset, train an SVM classifier, and visualize PCA-reduced decision regions using an AI data analyst.
What this AI workflow does
This AI Data Analyst workflow loads the Breast Cancer Wisconsin dataset from scikit-learn and summarizes the class balance. It scales features, applies PCA to two components, and visualizes the 2D projection and decision regions. It trains an SVM classifier and reports accuracy, a classification report, and a confusion matrix.
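The first step of the workflow, loading the dataset and summarizing the class balance, can be sketched in a few lines. This is a minimal illustration assuming scikit-learn and pandas are installed; the labels come straight from the bundled dataset.

```python
# Load the Breast Cancer Wisconsin dataset and tabulate the class balance.
from sklearn.datasets import load_breast_cancer
import pandas as pd

data = load_breast_cancer()

# Map numeric targets (0/1) to their names ("malignant"/"benign")
labels = pd.Series(data.target).map(dict(enumerate(data.target_names)))
print(labels.value_counts())  # benign 357, malignant 212
```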
Who this example is for
This is for data science learners and practitioners who want a compact, reproducible example of SVM classification on a standard medical dataset. It helps anyone who needs a reference for preprocessing, dimensionality reduction, and evaluation in a conversational, code-generating notebook workflow.
Expected analysis outcomes
These are the results the AI workflow is expected to generate.
- Class balance table showing 212 malignant and 357 benign samples
- 2D PCA scatter plot with two visible clusters
- SVM accuracy around 0.97 with a classification report
- Confusion matrix plot with false positive and false negative counts
Tools and libraries used
Main Python packages and tooling used to run this AI data analysis task.
- scikit-learn
- pandas
- numpy
- matplotlib
- seaborn
Prompt sequence
This is the exact list of prompts used in this workflow. The same prompt sequence is sent to each model so outputs and scores can be compared fairly.
1. load breast cancer dataset (from sklearn) and show class balance
2. scale features and reduce to 2 components with PCA, plot the result
3. train an SVM classifier and show accuracy and classification report
4. plot the confusion matrix
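Prompts 2 through 4 can be sketched as below. This is one possible implementation, not the exact code any model generated; it assumes the SVM is trained on the full scaled feature set (not the 2D projection) with an RBF kernel and a stratified held-out split.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
import matplotlib.pyplot as plt

X, y = load_breast_cancer(return_X_y=True)

# Standardize features so no single measurement dominates PCA or the SVM
X_scaled = StandardScaler().fit_transform(X)

# Reduce to 2 principal components and plot the projection
X_2d = PCA(n_components=2, random_state=42).fit_transform(X_scaled)
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, cmap="coolwarm", s=10)
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.title("Breast Cancer Wisconsin, 2-component PCA")

# Train an RBF-kernel SVM on the scaled features and evaluate on a test split
X_tr, X_te, y_tr, y_te = train_test_split(
    X_scaled, y, test_size=0.2, random_state=42, stratify=y
)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("Accuracy:", accuracy_score(y_te, pred))
print(classification_report(y_te, pred, target_names=["malignant", "benign"]))
print(confusion_matrix(y_te, pred))
```

With these (assumed) settings the test accuracy lands in the high 0.9x range, consistent with the expected outcome above; exact numbers depend on the split and kernel parameters.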
Model Comparison
We compared several LLMs on the same analysis task. The table below shows their scores side by side. You can also open each model run to check the full conversation and notebook results.
| Model Source | Score | Task | Execution | Output | Reasoning | Reliability | Run |
|---|---|---|---|---|---|---|---|
| gemma4:31b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gemma4:31b conversation |
| glm-5.1 | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open glm-5.1 conversation |
| gpt-5.4 | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gpt-5.4 conversation |
| gpt-oss:120b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gpt-oss:120b conversation |
| qwen3-coder-next | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open qwen3-coder-next conversation |
| qwen3.5:397b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open qwen3.5:397b conversation |
How scoring works
Each run is graded on five dimensions:
- Task Completion (0-2): how fully the workflow answered the requested analysis task and completed required steps.
- Execution Correctness (0-2): whether generated code is correct, consistent, and likely runnable without major fixes.
- Output Quality (0-3): how well tables, charts, and results match the expected analytical outcomes.
- Reasoning Quality (0-2): clarity and correctness of interpretation, explanation, and conclusions.
- Reliability (0-1): robustness of the run, including low hallucination risk and stable behavior.
The final score is the sum of these dimensions, so the total ranges from 0 to 10. Higher values indicate more complete, correct, and reliable notebook results for this workflow.
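The sum can be illustrated with a small sketch; the dictionary keys are hypothetical names for the five dimensions, filled with the maximum value in each to show how a perfect 10/10 is reached.

```python
# Hypothetical score card: each dimension at its maximum.
scores = {
    "task_completion": 2,        # 0-2
    "execution_correctness": 2,  # 0-2
    "output_quality": 3,         # 0-3
    "reasoning_quality": 2,      # 0-2
    "reliability": 1,            # 0-1
}
total = sum(scores.values())
print(f"{total}/10")  # -> 10/10
```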
AI Analysis with qwen3.5:397b
This run uses AI with a step-by-step list of prompts to perform the analysis task "Breast Cancer Diagnosis with SVM in Python". Below, you can review the full conversation generated with qwen3.5:397b. After the conversation, we show detailed score cards for this run.
Score cards for qwen3.5:397b for Breast Cancer Diagnosis with SVM in Python
Total score: 10/10
Task Completion: 2/2
Excellent. All four requested steps were completed: class balance, PCA visualization, SVM training with accuracy/report, and confusion matrix.
Execution Correctness: 2/2
Excellent. The provided code is coherent and likely runnable, using standard sklearn and plotting steps without obvious syntax or logic errors.
Output Quality: 3/3
Excellent. Outputs match the expected outcomes: class counts 357/212, a PCA plot, accuracy 0.9825, and a confusion matrix with class-wise errors shown.
Reasoning Quality: 2/2
Excellent. The workflow follows a sensible ML pipeline and the assistant's summaries correctly interpret the observed results.
Reliability: 1/1
Excellent. The workflow is consistent and uses standard methods; no hallucinated steps or unstable logic are evident.
Try MLJAR Studio
Run the same type of AI-powered data analysis on your own datasets with conversational notebooks in MLJAR Studio.
Explore More AI Analysis Examples
Discover additional workflows across categories. Each example includes prompts, conversation outputs, and model-level scoring so you can compare approaches and results.