HR Employee Attrition Analysis in Python
Explore the IBM HR Analytics dataset to uncover attrition patterns by department, age, salary, and job satisfaction.
What this AI workflow does
This AI Data Analyst workflow loads the IBM HR Analytics attrition CSV from a URL, summarizes the dataset shape, and calculates the overall attrition rate. It generates visual comparisons of attrition rates by department and job role, and contrasts monthly income distributions for employees who left versus stayed. It also examines relationships between job satisfaction, work-life balance, and attrition using correlation analysis and a heatmap.
Who this example is for
This is for HR analysts and people analytics practitioners who need a reproducible way to explore attrition patterns in a standard benchmark dataset. It is also useful for data analysts learning exploratory analysis workflows that combine grouped summaries, distribution plots, and correlation checks.
Expected analysis outcomes
These are the results the AI workflow is expected to generate.
- Loaded dataset with shape (1470, 35) and computed overall attrition rate (16.1%)
- Bar chart of attrition rate by department and job role
- Box plot comparing monthly income for leavers vs stayers
- Correlation heatmap linking job satisfaction and work-life balance with attrition
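The first expected outcome (load the CSV, report the shape, compute the overall attrition rate) can be sketched along these lines. The tiny stand-in frame below is hypothetical so the snippet runs offline; the commented `read_csv` call shows the actual dataset URL used by the workflow.

```python
import pandas as pd

# In the workflow the frame comes from the public IBM HR Analytics CSV:
# df = pd.read_csv(
#     "https://raw.githubusercontent.com/pplonski/datasets-for-start/"
#     "refs/heads/master/employee_attrition/HR-Employee-Attrition-All.csv"
# )
# Hypothetical stand-in rows with the same "Yes"/"No" Attrition encoding:
df = pd.DataFrame({
    "Attrition": ["Yes", "No", "No", "No", "No", "Yes"],
    "Department": ["Sales", "Sales", "Research & Development",
                   "Research & Development", "Human Resources", "Sales"],
    "MonthlyIncome": [2600, 5100, 4900, 6500, 3000, 2200],
})

print(df.shape)  # the real dataset reports (1470, 35)

# Overall attrition rate: share of rows where Attrition == "Yes"
attrition_rate = (df["Attrition"] == "Yes").mean()
print(f"Overall attrition rate: {attrition_rate:.1%}")
```

On the full dataset the same expression yields roughly 16.1%, matching the expected outcome above.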
Tools and libraries used
Main Python packages and tooling used to run this AI data analysis task.
- pandas
- numpy
- matplotlib
- seaborn
Prompt sequence
This is the exact list of prompts used in this workflow. The same prompt sequence is sent to each model so outputs and scores can be compared fairly.
1. load HR attrition dataset from https://raw.githubusercontent.com/pplonski/datasets-for-start/refs/heads/master/employee_attrition/HR-Employee-Attrition-All.csv and show overall attrition rate
2. plot attrition rate by department and job role
3. compare monthly income distribution for employees who left vs stayed
4. show correlation between job satisfaction, work-life balance and attrition
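Prompts 2 through 4 can be sketched as the following exploratory steps. This is a minimal illustration on a hypothetical stand-in frame (column names follow the IBM dataset; the row values are made up); the actual runs work on the full 1470-row CSV and typically draw the heatmap with seaborn.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# Hypothetical stand-in rows mirroring the dataset's column names.
df = pd.DataFrame({
    "Attrition": ["Yes", "No", "No", "Yes", "No", "No"],
    "Department": ["Sales", "Sales", "R&D", "R&D", "R&D", "HR"],
    "MonthlyIncome": [2600, 5100, 4900, 2300, 6500, 3000],
    "JobSatisfaction": [1, 4, 3, 2, 4, 3],
    "WorkLifeBalance": [2, 3, 3, 1, 4, 3],
})
# Numeric 0/1 flag so rates and correlations can be computed directly.
df["AttritionFlag"] = (df["Attrition"] == "Yes").astype(int)

# Prompt 2: attrition rate by department (mean of the 0/1 flag per group).
dept_rates = df.groupby("Department")["AttritionFlag"].mean().sort_values(ascending=False)
dept_rates.plot(kind="bar", title="Attrition rate by department")

# Prompt 3: monthly income distribution for leavers vs stayers.
df.boxplot(column="MonthlyIncome", by="Attrition")

# Prompt 4: correlation of satisfaction and work-life balance with attrition.
corr = df[["JobSatisfaction", "WorkLifeBalance", "AttritionFlag"]].corr()
print(corr.round(2))

plt.close("all")
```

The same `groupby(...).mean()` pattern extends to job role by grouping on `JobRole`, and the correlation matrix feeds directly into a heatmap call.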
Model Comparison
We compared several LLMs on the same analysis task. The table below shows their scores side by side. You can also open each model run to check the full conversation and notebook results.
| Model Source | Score | Task | Execution | Output | Reasoning | Reliability | Run |
|---|---|---|---|---|---|---|---|
| gemma4:31b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gemma4:31b conversation |
| glm-5.1 | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open glm-5.1 conversation |
| gpt-5.4 | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gpt-5.4 conversation |
| gpt-oss:120b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gpt-oss:120b conversation |
| qwen3-coder-next | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open qwen3-coder-next conversation |
| qwen3.5:397b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open qwen3.5:397b conversation |
How scoring works
Each run is graded on five dimensions:
- Task Completion (0-2): how fully the workflow answered the requested analysis task and completed required steps.
- Execution Correctness (0-2): whether generated code is correct, consistent, and likely runnable without major fixes.
- Output Quality (0-3): how well tables, charts, and results match the expected analytical outcomes.
- Reasoning Quality (0-2): clarity and correctness of interpretation, explanation, and conclusions.
- Reliability (0-1): robustness of the run, including low hallucination risk and stable behavior.
The final score is the sum of these five dimensions, giving a total from 0 to 10. Higher values indicate more complete, correct, and reliable notebook results for this workflow.
AI Analysis with gemma4:31b
This run uses AI with a step-by-step list of prompts to perform the analysis task "HR Employee Attrition Analysis in Python". Below, you can review the full conversation generated with gemma4:31b, followed by detailed score cards for this run.
Score cards for gemma4:31b for HR Employee Attrition Analysis in Python
Total score: 10/10
Task Completion: 2/2
Excellent: All four requested analyses were completed: dataset load and attrition rate, department/job role plot, income comparison, and correlation heatmap.
Execution Correctness: 2/2
Excellent: The code shown is syntactically coherent and the outputs indicate it ran successfully, including the expected shape, rate, plots, and summary tables.
Output Quality: 3/3
Excellent: Outputs match the expected outcomes semantically: shape (1470, 35), attrition rate 16.12%, leavers earn less on average, and the correlation matrix supports the stated relationship. The department/job role summary also identifies Sales as highest and Research & Development as lowest among major groups.
Reasoning Quality: 2/2
Excellent: The workflow uses appropriate EDA steps and correctly interprets the numeric summaries and correlations. Explanations are clear and aligned with the outputs.
Reliability: 1/1
Excellent: The workflow is consistent and data-driven, with no obvious hallucinations or unsupported claims.
Try MLJAR Studio
Run the same type of AI-powered data analysis on your own datasets with conversational notebooks in MLJAR Studio.
Explore More AI Analysis Examples
Discover additional workflows across categories. Each example includes prompts, conversation outputs, and model-level scoring so you can compare approaches and results.