Data Cleaning with Pandas in Python
Clean a messy real-world dataset: handle missing values, fix dtypes, remove duplicates, and standardize columns using pandas and an AI data analyst.
What this AI workflow does
This AI Data Analyst workflow loads the World Happiness Report 2024 CSV and profiles data quality issues such as missing values, incorrect dtypes, and duplicate rows. It cleans the dataset by dropping columns with more than 50% missingness, filling remaining numeric nulls with column medians, and converting numeric-looking strings to floats. It standardizes column names to snake_case and produces a before/after summary comparing shape, dtypes, and null counts.
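The steps above can be sketched as a single pandas routine. This is a minimal sketch under our assumptions, not the workflow's actual code: the `clean` helper name and the 90% "mostly parseable" threshold for string-to-float conversion are ours.

```python
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Drop columns with more than 50% missing values
    df = df.loc[:, df.isna().mean() <= 0.5].copy()

    # Standardize column names to snake_case
    df.columns = (
        df.columns.str.strip()
                  .str.replace(r"[^\w]+", "_", regex=True)
                  .str.lower()
                  .str.strip("_")
    )

    # Convert numeric-looking string columns to float, leaving true
    # text columns (e.g. country names) untouched
    for col in df.select_dtypes(include="object"):
        converted = pd.to_numeric(df[col], errors="coerce")
        if converted.notna().mean() >= 0.9:  # at least 90% of values parse
            df[col] = converted

    # Fill remaining numeric nulls with column medians
    num_cols = df.select_dtypes(include="number").columns
    df[num_cols] = df[num_cols].fillna(df[num_cols].median())
    return df
```

The guard on the conversion step matters: coercing every object column would silently turn text columns into all-NaN floats, which is exactly the failure mode flagged in the score cards below.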
Who this example is for
This is for analysts and data scientists who need a repeatable pandas-based cleaning routine for a real-world CSV. It helps anyone preparing data for downstream analysis by making cleaning steps explicit and verifiable with before/after checks.
Expected analysis outcomes
These are the results the AI workflow is expected to generate.
- Data quality report covering missing values, dtype problems, and duplicate rows
- Columns with >50% missing values removed
- Numeric missing values filled with column medians
- Snake_case column names and corrected numeric dtypes
- Before/after comparison of dataset shape and null counts
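The before/after comparison in the last bullet can be produced with a small helper applied to the raw and cleaned frames. A sketch under our assumptions (`summarize` is our name, not part of the workflow):

```python
import pandas as pd

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    # One row per column: its dtype and its null count
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "nulls": df.isna().sum(),
    })

# Usage: capture a summary before and after cleaning, then join them
# before = summarize(raw_df)
# after = summarize(clean_df)
# comparison = before.join(after, lsuffix="_before", rsuffix="_after")
```

Shape can be compared separately with `raw_df.shape` vs `clean_df.shape`, since dropped columns no longer appear in the joined table.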
Tools and libraries used
Main Python packages and tooling used to run this AI data analysis task.
- pandas
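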
- numpy
- Python
- Jupyter
Prompt sequence
This is the exact list of prompts used in this workflow. The same prompt sequence is sent to each model so outputs and scores can be compared fairly.
1. load the dataset https://raw.githubusercontent.com/pplonski/datasets-for-start/refs/heads/master/world_happiness_report/WHR_2024.csv and identify data quality issues: missing values, wrong dtypes, duplicates
2. fill missing numeric values with column medians and drop columns with >50% missing
3. rename columns to snake_case and convert any string numbers to float
4. show a before/after summary of the cleaned dataset
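The first prompt asks for a data quality profile. One way to structure that check (a sketch; the `profile` helper is our illustration, not the workflow's code):

```python
import pandas as pd

def profile(df: pd.DataFrame) -> dict:
    # Report the three issue classes named in the first prompt:
    # missing values, suspect dtypes, and duplicate rows
    return {
        "shape": df.shape,
        "missing": df.isna().sum().to_dict(),
        "object_dtypes": list(df.select_dtypes(include="object").columns),
        "duplicate_rows": int(df.duplicated().sum()),
    }

# Usage against the real CSV (network access required):
# df = pd.read_csv(
#     "https://raw.githubusercontent.com/pplonski/datasets-for-start/"
#     "refs/heads/master/world_happiness_report/WHR_2024.csv"
# )
# profile(df)
```

Columns listed under `object_dtypes` are the candidates for "wrong dtype": numbers stored as strings show up there alongside genuine text columns.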
Model Comparison
We compared several LLMs on the same analysis task. The table below shows their scores side by side. You can also open each model run to review the full conversation and notebook results.
| Model Source | Score | Task | Execution | Output | Reasoning | Reliability | Run |
|---|---|---|---|---|---|---|---|
| gpt-5.4 | 8/10 | 2/2 | 2/2 | 2/3 | 1/2 | 1/1 | Open gpt-5.4 conversation |
| glm-5.1 | 7/10 | 1/2 | 2/2 | 2/3 | 1/2 | 1/1 | Open glm-5.1 conversation |
| gpt-oss:120b | 6/10 | 2/2 | 1/2 | 2/3 | 1/2 | 0/1 | Open gpt-oss:120b conversation |
| gemma4:31b | 5/10 | 1/2 | 1/2 | 2/3 | 1/2 | 0/1 | Open gemma4:31b conversation |
| qwen3.5:397b | 5/10 | 1/2 | 1/2 | 2/3 | 1/2 | 0/1 | Open qwen3.5:397b conversation |
| qwen3-coder-next | 3/10 | 1/2 | 0/2 | 1/3 | 1/2 | 0/1 | Open qwen3-coder-next conversation |
How scoring works
Each run is graded on five dimensions:
- Task Completion (0-2): how fully the workflow answered the requested analysis task and completed required steps.
- Execution Correctness (0-2): whether generated code is correct, consistent, and likely runnable without major fixes.
- Output Quality (0-3): how well tables, charts, and results match the expected analytical outcomes.
- Reasoning Quality (0-2): clarity and correctness of interpretation, explanation, and conclusions.
- Reliability (0-1): robustness of the run, including low hallucination risk and stable behavior.
The final score is the sum of these dimensions, for a total of 0-10. Higher values indicate more complete, correct, and reliable notebook results for this workflow.
AI Analysis with qwen3-coder-next
This run uses AI with a step-by-step list of prompts to perform the analysis task "Data Cleaning with Pandas in Python". Below, you can review the full conversation generated with qwen3-coder-next. After the conversation, we show detailed score cards for this run.
Score cards for qwen3-coder-next for Data Cleaning with Pandas in Python
Total score: 3/10
Task Completion: 1/2
Needs work. The workflow identifies missing values and duplicates, and it attempts cleaning and a before/after summary. However, it does not correctly complete the requested cleaning logic: it mishandles string columns, and the summary is inconsistent.
Execution Correctness: 0/2
Needs work. The code contains a major error: `pd.to_numeric(..., errors='coerce')` converts `country` and `region` to `float64`, which is incorrect for string columns. The before/after summary also reports incorrect missing counts for `country` and `region`.
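This failure mode is easy to reproduce: coercing every object column turns text columns into all-NaN floats. A guarded version (our sketch, with illustrative column names and a 90% parse threshold we chose) keeps the conversion only when most values actually parse:

```python
import pandas as pd

df = pd.DataFrame({
    "country": ["Finland", "Denmark"],  # true text column
    "score": ["7.74", "7.58"],          # numbers stored as strings
})

# Naive: coerces 'country' to all-NaN float64 -- the bug flagged above
naive = df.apply(lambda s: pd.to_numeric(s, errors="coerce"))

# Guarded: convert a column only if most of its values parse as numbers
cleaned = df.copy()
for col in cleaned.select_dtypes(include="object"):
    converted = pd.to_numeric(cleaned[col], errors="coerce")
    if converted.notna().mean() >= 0.9:
        cleaned[col] = converted
```

In the naive result, `country` is `float64` and entirely NaN; the guarded version converts `score` to float while leaving `country` untouched.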
Output Quality: 1/3
Needs work. Some outputs are correct, such as the initial missing-value and duplicate checks and the final zero-null report for numeric columns. But the final summary is incomplete or truncated and contains contradictory or incorrect values, so it does not fully match the expected outcomes.
Reasoning Quality: 1/2
Needs work. The reasoning correctly notes that no columns exceed 50% missingness and that duplicates are absent, but it also makes unsupported claims about restoring string columns and presents inconsistent statements about shape and missing values. The logic around dtype conversion is shallow and partly misleading.
Reliability: 0/1
Needs work. The workflow is fragile: it applies numeric conversion to all object columns, causing incorrect dtype changes, and then relies on inconsistent manual corrections and summaries. This makes the result unreliable.
Try MLJAR Studio
Run the same type of AI-powered data analysis on your own datasets with conversational notebooks in MLJAR Studio.
Explore More AI Analysis Examples
Discover additional workflows across categories. Each example includes prompts, conversation outputs, and model-level scoring so you can compare approaches and results.