Data Cleaning with Pandas in Python

Clean a messy real-world dataset: handle missing values, fix dtypes, remove duplicates, and standardize columns using pandas and an AI data analyst.

This page shows an AI data analysis example. A workflow is a list of prompts (instructions) used to complete one analysis task step by step. We send the same prompt list to multiple AI models, then compare their notebook outputs and scores.

What this AI workflow does

This AI Data Analyst workflow loads the World Happiness Report 2024 CSV and profiles data quality issues such as missing values, incorrect dtypes, and duplicate rows. It cleans the dataset by dropping columns with more than 50% missingness, filling remaining numeric nulls with column medians, and converting numeric-looking strings to floats. It standardizes column names to snake_case and produces a before/after summary comparing shape, dtypes, and null counts.
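
If you want to reproduce the routine outside MLJAR Studio, the steps condense to a short pandas script. This is a minimal sketch, assuming the CSV URL from the prompt list below; the 0.5 parse-rate guard for the string-to-float step mirrors the heuristic used in the glm-5.1 run shown later.

import re
import pandas as pd

url = ("https://raw.githubusercontent.com/pplonski/datasets-for-start/"
       "refs/heads/master/world_happiness_report/WHR_2024.csv")
df = pd.read_csv(url)

# Profile: shape, duplicate rows, missing values, dtypes
print(df.shape, df.duplicated().sum())
print(pd.DataFrame({"dtype": df.dtypes, "missing": df.isnull().sum()}))

# Drop columns with more than 50% missing values
df = df.loc[:, df.isnull().mean() <= 0.5]

# Fill remaining numeric nulls with column medians
num_cols = df.select_dtypes(include="number").columns
df[num_cols] = df[num_cols].fillna(df[num_cols].median())

# Convert numeric-looking string columns to float, leaving real text alone
for col in df.select_dtypes(include="object").columns:
    coerced = pd.to_numeric(df[col], errors="coerce")
    if coerced.notna().mean() >= 0.5:  # mostly parseable -> treat as numeric
        df[col] = coerced

# Standardize column names to snake_case
df.columns = [re.sub(r"[^0-9a-zA-Z]+", "_", c).strip("_").lower()
              for c in df.columns]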

Who this example is for

This is for analysts and data scientists who need a repeatable pandas-based cleaning routine for a real-world CSV. It helps anyone preparing data for downstream analysis by making cleaning steps explicit and verifiable with before/after checks.

Expected analysis outcomes

These are the results the AI workflow is expected to generate.

  • Data quality report covering missing values, dtype problems, and duplicate rows
  • Columns with >50% missing values removed
  • Numeric missing values filled with column medians
  • Snake_case column names and corrected numeric dtypes
  • Before/after comparison of dataset shape and null counts (see the sketch after this list)
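
The last outcome, the before/after comparison, can be generated by keeping a copy of the raw frame. A minimal sketch, assuming orig is the untouched DataFrame, df the cleaned one, and that column names still align after the rename:

import pandas as pd

# Side-by-side dtypes and null counts, before vs. after cleaning
summary = pd.DataFrame({
    "original_dtype": orig.dtypes,
    "original_missing": orig.isnull().sum(),
    "cleaned_dtype": df.dtypes,
    "cleaned_missing": df.isnull().sum(),
})
print(f"shape: {orig.shape} -> {df.shape}")
summary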

Tools and libraries used

Main Python packages and tooling used to run this AI data analysis task.

  • pandas
  • numpy
  • python
  • jupyter

Prompt sequence

This is the exact list of prompts used in this workflow. The same prompt sequence is sent to each model so outputs and scores can be compared fairly.

  1. load the dataset https://raw.githubusercontent.com/pplonski/datasets-for-start/refs/heads/master/world_happiness_report/WHR_2024.csv and identify data quality issues: missing values, wrong dtypes, duplicates
  2. fill missing numeric values with column medians and drop columns with >50% missing
  3. rename columns to snake_case and convert any string numbers to float
  4. show a before/after summary of the cleaned dataset

Model Comparison

We compared several LLM models on the same analysis task. The table below shows their scores side by side. You can also open each model run to check the full conversation and notebook results.

Model            | Score | Task | Execution | Output | Reasoning | Reliability | Run
gpt-5.4          | 8/10  | 2/2  | 2/2       | 2/3    | 1/2       | 1/1         | Open gpt-5.4 conversation
glm-5.1          | 7/10  | 1/2  | 2/2       | 2/3    | 1/2       | 1/1         | Open glm-5.1 conversation
gpt-oss:120b     | 6/10  | 2/2  | 1/2       | 2/3    | 1/2       | 0/1         | Open gpt-oss:120b conversation
gemma4:31b       | 5/10  | 1/2  | 1/2       | 2/3    | 1/2       | 0/1         | Open gemma4:31b conversation
qwen3.5:397b     | 5/10  | 1/2  | 1/2       | 2/3    | 1/2       | 0/1         | Open qwen3.5:397b conversation
qwen3-coder-next | 3/10  | 1/2  | 0/2       | 1/3    | 1/2       | 0/1         | Open qwen3-coder-next conversation

How scoring works

Each run is graded on five dimensions:

  • Task Completion (0-2): how fully the workflow answered the requested analysis task and completed required steps.
  • Execution Correctness (0-2): whether generated code is correct, consistent, and likely runnable without major fixes.
  • Output Quality (0-3): how well tables, charts, and results match the expected analytical outcomes.
  • Reasoning Quality (0-2): clarity and correctness of interpretation, explanation, and conclusions.
  • Reliability (0-1): robustness of the run, including low hallucination risk and stable behavior.

The final score is the sum of these dimensions, so total score = 0-10. Higher values indicate more complete, correct, and reliable notebook results for this workflow.
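
As a worked example of that arithmetic, the glm-5.1 run below scores 1 + 2 + 2 + 1 + 1. A tiny illustrative sketch (the dictionary keys are our own naming):

# Sum the five sub-scores to get the 0-10 total
scores = {
    "task_completion": 1,        # 0-2
    "execution_correctness": 2,  # 0-2
    "output_quality": 2,         # 0-3
    "reasoning_quality": 1,      # 0-2
    "reliability": 1,            # 0-1
}
print(f"total: {sum(scores.values())}/10")  # total: 7/10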

AI Analysis with glm-5.1

This run uses AI with a step-by-step list of prompts to perform the analysis task Data Cleaning with Pandas in Python. Below, you can review the full conversation generated with glm-5.1. After the conversation, we show detailed score cards for this run.

load the dataset https://raw.githubusercontent.com/pplonski/datasets-for-start/refs/heads/master/world_happiness_report/WHR_2024.csv and identify data quality issues: missing values, wrong dtypes, duplicates
Thinking

The user wants me to load a dataset and identify data quality issues. Let me start by loading the dataset and inspecting it.

Let me load the dataset and do a thorough quality check.

Check data quality
import pandas as pd

df = pd.read_csv("https://raw.githubusercontent.com/pplonski/datasets-for-start/refs/heads/master/world_happiness_report/WHR_2024.csv")

# Basic shape and structure
shape = df.shape
columns = df.dtypes

# Missing values per column
missing = df.isnull().sum()
missing_pct = (df.isnull().sum() / len(df) * 100).round(2)

# Duplicate rows
duplicates = df.duplicated().sum()

# Combine missing info into a nice table
quality_df = pd.DataFrame({
    "dtype": columns,
    "missing_count": missing,
    "missing_pct": missing_pct
})

print(f"Shape: {shape[0]} rows x {shape[1]} columns")
print(f"Duplicate rows: {duplicates}")
print()
quality_df
Shape: 143 rows x 9 columns
Duplicate rows: 0
dtype missing_count missing_pct
country str 0 0.0
region str 6 4.2
happiness_score float64 0 0.0
gdp_per_capita float64 3 2.1
social_support float64 3 2.1
healthy_life_expectancy float64 3 2.1
freedom_to_make_life_choices float64 3 2.1
generosity float64 3 2.1
perceptions_of_corruption float64 3 2.1
Thinking

The output already shows a comprehensive data quality summary. Let me analyze what we have:

  1. Shape: 143 rows × 9 columns

  2. Duplicates: 0 duplicate rows

  3. Missing values:

    • region: 6 missing (4.2%)
    • gdp_per_capita: 3 missing (2.1%)
    • social_support: 3 missing (2.1%)
    • healthy_life_expectancy: 3 missing (2.1%)
    • freedom_to_make_life_choices: 3 missing (2.1%)
    • generosity: 3 missing (2.1%)
    • perceptions_of_corruption: 3 missing (2.1%)
    • country: 0 missing
    • happiness_score: 0 missing
  4. Dtypes:

    • country: str (object)
    • region: str (object)
    • All others: float64

The dtypes look reasonable. country and region are strings, which is correct. The numeric columns are float64, which is correct. No obvious wrong dtypes.

However, I should check whether the numeric columns that have 3 missing values each are missing for the same 3 rows (which would make sense - those countries might have no data at all for those metrics). Also, I should check if there are any suspicious values in the numeric columns (like negative values where they shouldn't be, etc.). But the user only asked about missing values, wrong dtypes, and duplicates - which we have answers for.

The output is sufficient to answer the user's original goal. Let me provide a final answer.

Wait - I should double-check: are the dtypes really "correct"? country as str and region as str are fine. All numeric columns as float64 are fine. There don't seem to be any wrong dtypes.

I think I have enough information to provide a final answer.

Here's a summary of the data quality findings:

Missing Values

  • 7 out of 9 columns have missing data.
  • region has the most: 6 missing values (4.2%).
  • Six numeric columns (gdp_per_capita, social_support, healthy_life_expectancy, freedom_to_make_life_choices, generosity, perceptions_of_corruption) each have exactly 3 missing values (2.1%). This consistent count across all six suggests the same 3 rows may be entirely empty for these features — worth investigating whether those countries simply lack survey data.
  • country and happiness_score are complete (0 missing).

Wrong Dtypes

  • No issues found. country and region are correctly typed as strings. All six metric columns plus happiness_score are float64, which is appropriate for continuous scores. No numeric-looking columns stored as strings, and no obvious type mismatches.

Duplicates

  • 0 duplicate rows. Clean on this front.

Potential concern to watch: The 6 missing region values could make regional grouping unreliable for those countries. And the 3 rows missing all numeric metrics may need to be dropped or imputed depending on the analysis. Would you like me to dig into which specific countries are affected?
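
One way to answer that follow-up, not executed in the recorded run, is to list the rows behind the shared numeric gaps. A sketch against the df loaded above:

# Rows where any numeric metric is null -- likely the same 3 countries
num_cols = df.select_dtypes(include="number").columns
df.loc[df[num_cols].isnull().any(axis=1), ["country", "region"]]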

fill missing numeric values with column medians and drop columns with >50% missing
Thinking

The user wants to:

  1. Fill missing numeric values with column medians
  2. Drop columns with >50% missing values

From the quality report, no column has >50% missing (the highest is region at 4.2%), so no columns will be dropped. Let me execute this step.

No columns exceed the 50% missing threshold (the highest is region at 4.2%), so none will be dropped. Let me fill the numeric missing values with column medians.

Fill missing values with medians
# Identify numeric columns with missing values
numeric_cols = df.select_dtypes(include="number").columns
medians = df[numeric_cols].median()

# Fill missing numeric values with column medians
df[numeric_cols] = df[numeric_cols].fillna(medians)

# Check remaining missing values
remaining_missing = df.isnull().sum()
remaining_missing
country                         0
region                          6
happiness_score                 0
gdp_per_capita                  0
social_support                  0
healthy_life_expectancy         0
freedom_to_make_life_choices    0
generosity                      0
perceptions_of_corruption       0
dtype: int64
Thinking

The original user goal was to "fill missing numeric values with column medians and drop columns with >50% missing." Looking at the latest output, both parts have been addressed:

  1. Missing numeric values have been filled with column medians - the output shows 0 remaining missing values in all numeric columns.
  2. No columns had >50% missing (the highest was region at 4.2%), so none were dropped.

The task is complete. I should provide a natural final answer with Path A.

Both parts of the goal are complete based on the observed output:

  1. Numeric missing values filled with medians: All seven numeric columns now show 0 missing values. The median imputation worked correctly for gdp_per_capita, social_support, healthy_life_expectancy, freedom_to_make_life_choices, generosity, and perceptions_of_corruption (each had 3 missing), plus happiness_score (already complete).

  2. Columns with >50% missing dropped: No columns exceeded the 50% missing threshold — the highest was region at only 4.2% (6 missing out of 143). So no columns were removed by this rule. The only remaining missing values are the 6 in region, which is a non-numeric column and was not subject to median imputation.

The dataset now has 143 rows × 9 columns, with zero missing numeric values and only region still carrying 6 nulls.
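
A defensive addition, not part of the recorded run, is an assertion that fails loudly if any numeric null survives imputation:

# Guard: no numeric column should carry nulls after median imputation
assert df.select_dtypes(include="number").isnull().sum().sum() == 0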

rename columns to snake_case and convert any string numbers to float
Thinking

The user wants me to:

  1. Rename columns to snake_case
  2. Convert any string numbers to float

Let me look at the current columns from the notebook state:

  • country
  • region
  • happiness_score
  • gdp_per_capita
  • social_support
  • healthy_life_expectancy
  • freedom_to_make_life_choices
  • generosity
  • perceptions_of_corruption

Most of these are already in snake_case. Let me check if there are any that aren't... Looking at the original column names from the quality_df output, they seem to already be in snake_case format.

Wait, let me re-read. The columns shown in output-1 are already snake_case. But maybe the actual DataFrame columns have different casing? Let me look more carefully...

Actually, from the output-1, the column names listed are:

  • country
  • region
  • happiness_score
  • gdp_per_capita
  • social_support
  • healthy_life_expectancy
  • freedom_to_make_life_choices
  • generosity
  • perceptions_of_corruption

These all appear to be in snake_case already. So renaming might not change much, but I should still apply the conversion to be safe.

For converting string numbers to float - I need to check which columns have string dtype but contain numeric values. From output-1:

  • country: str
  • region: str

The rest are float64. So I need to check if country or region contain string numbers that could be converted to float. Likely country is actual country names and region is actual region names, but I should check.

Actually, looking at the dtypes: country is str and region is str. It's unlikely these contain numeric values, but I should still write code that:

  1. Converts column names to snake_case (they already seem to be)
  2. Attempts to convert string columns that contain numeric-looking values to float

Let me write the code step. I'll rename columns to snake_case and try to convert any string columns that contain parseable numbers to float.

For snake_case conversion, I can use a regex approach: replace spaces and special characters with underscores, convert to lowercase.

Since the columns already appear snake_case, the rename step is straightforward. For the string-to-float conversion, I should attempt pd.to_numeric on string columns with errors='coerce', and only convert if it makes sense (i.e., not converting actual text columns to NaN).

Let me write clean code for this.

The current columns already look mostly snake_case, but I'll apply a proper conversion to be safe. I'll also attempt to coerce any string columns that contain parseable numbers into float dtype.

Clean column names and types
import re

# Rename columns to snake_case: lowercase, replace non-alphanumeric runs with _
df.columns = [re.sub(r'[^a-zA-Z0-9]+', '_', col).strip('_').lower() for col in df.columns]

# Try converting string columns that actually contain numeric values to float
str_cols = df.select_dtypes(include="object").columns
for col in str_cols:
    numeric_version = pd.to_numeric(df[col], errors="coerce")
    # Only convert if most values parsed successfully (avoid turning real text into NaN)
    if numeric_version.notna().sum() > 0 and numeric_version.notna().sum() >= len(df) * 0.5:
        df[col] = numeric_version

df.dtypes
/var/folders/pp/x90pfm0140d4glxshn77bthh0000gn/T/ipykernel_7160/896524751.py:7: Pandas4Warning: For backward compatibility, 'str' dtypes are included by select_dtypes when 'object' dtype is specified. This behavior is deprecated and will be removed in a future version. Explicitly pass 'str' to `include` to select them, or to `exclude` to remove them and silence this warning.
See https://pandas.pydata.org/docs/user_guide/migration-3-strings.html#string-migration-select-dtypes for details on how to write code that works with pandas 2 and 3.
  str_cols = df.select_dtypes(include="object").columns
country                             str
region                              str
happiness_score                 float64
gdp_per_capita                  float64
social_support                  float64
healthy_life_expectancy         float64
freedom_to_make_life_choices    float64
generosity                      float64
perceptions_of_corruption       float64
dtype: object
Thinking

The user's goal was to rename columns to snake_case and convert any string numbers to float. Let me analyze the latest output (cell-3):

  1. Rename columns to snake_case: The columns were already in snake_case format (e.g., country, region, happiness_score, gdp_per_capita, etc.). The regex transformation confirmed this - no changes were needed since the columns were already properly formatted.

  2. Convert string numbers to float: The two string columns are country and region. The code tried to convert them using pd.to_numeric(), but neither column contained parseable numeric values (they're actual text like country names and region names), so they remained as str dtype. This is correct behavior - there are no "string numbers" in this dataset that need conversion.

The output confirms that both country and region stayed as str, and all numeric columns are already float64. There are no string columns containing numeric values that need conversion.

This is sufficient to answer the goal - the columns are already in snake_case, and there are no string numbers to convert.

Both parts of the goal have been addressed:

  1. Snake_case rename: The columns were already in snake_case format — the regex transformation confirmed this, leaving names unchanged (country, region, happiness_score, gdp_per_capita, social_support, healthy_life_expectancy, freedom_to_make_life_choices, generosity, perceptions_of_corruption).

  2. String numbers to float: The only two string columns are country and region, which contain actual text (country names and region labels) — no parseable numeric values. The conversion attempt correctly left them as str. All other columns are already float64.

No further action is needed on this dataset for this goal.
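
The Pandas4Warning emitted above is harmless here, and the warning text itself suggests the fix: pass the string dtype to include explicitly. A sketch following that guidance:

# Select text columns explicitly, per the warning's migration note
str_cols = df.select_dtypes(include="str").columns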

show a before/after summary of the cleaned dataset
Thinking

The user wants a before/after summary of the cleaned dataset. I need to compare the original state of the dataframe (as loaded) with the current cleaned state. Since the dataframe has been modified in place, I'll need to re-read the original CSV to get the "before" state, then compare it to the current "after" state.

Let me create a summary comparing:

  • Column names (before/after snake_case rename)
  • Dtypes (before/after)
  • Missing values (before/after)
  • Duplicate rows

I'll re-read the original data to get the "before" state.

I'll re-read the original CSV to capture the "before" state, then compare it against the current cleaned df.

Compare original and cleaned data
# Re-read original to get "before" state
orig = pd.read_csv("https://raw.githubusercontent.com/pplonski/datasets-for-start/refs/heads/master/world_happiness_report/WHR_2024.csv")

before = pd.DataFrame({
    "column": orig.columns,
    "original_dtype": orig.dtypes.values,
    "original_missing": orig.isnull().sum().values
})

after = pd.DataFrame({
    "column": df.columns,
    "cleaned_dtype": df.dtypes.values,
    "cleaned_missing": df.isnull().sum().values
})

summary = pd.concat([before, after.drop(columns="column")], axis=1)
summary
column original_dtype original_missing cleaned_dtype cleaned_missing
0 country str 0 str 0
1 region str 6 str 6
2 happiness_score float64 0 float64 0
3 gdp_per_capita float64 3 float64 0
4 social_support float64 3 float64 0
5 healthy_life_expectancy float64 3 float64 0
6 freedom_to_make_life_choices float64 3 float64 0
7 generosity float64 3 float64 0
8 perceptions_of_corruption float64 3 float64 0
Thinking

The user wants a before/after summary of the cleaned dataset. The latest output (cell-4) already shows a before/after summary table. Let me analyze it:

  • Columns: column, original_dtype, original_missing, cleaned_dtype, cleaned_missing
  • The output is truncated but shows the key information

The output already answers the goal. Let me provide insights based on what's visible.

Key observations from the output:

  1. Column names were already snake_case - no changes needed
  2. No dtype changes occurred (str stayed str, float64 stayed float64)
  3. Missing values were reduced from 3 each in 6 numeric columns to 0 (median imputation)
  4. region still has 6 missing values (text column, not imputed)

The summary table is already produced. I should provide Path A with final insights.

The before/after summary is already displayed. Here's what it tells us:

Column names: All 9 columns were already in snake_case, so no renames were needed.

Dtypes: No conversions occurred — country and region are genuinely text columns and were correctly left as str. The remaining 7 columns were already float64.

Missing values — the main cleaning impact:

  • region still has 6 missing entries — these are text labels and weren't imputed.
  • The 6 numeric columns (gdp_per_capita, social_support, healthy_life_expectancy, freedom_to_make_life_choices, generosity, perceptions_of_corruption) each went from 3 missing → 0, filled with column medians.
  • happiness_score had 0 missing throughout — no action needed.

In short, the only material change was median-imputing 18 total missing numeric values across 6 columns. The string-number conversion step found no columns to transform, and the snake_case rename was a no-op.

Score cards for glm-5.1 for Data Cleaning with Pandas in Python

Total score: 7/10

  • Task Completion: 1/2

    Needs work

    The workflow completed the initial quality check and median imputation, but it did not clearly demonstrate a full before/after cleaned dataset summary or any columns dropped for >50% missing. The snake_case step was effectively a no-op because the columns were already standardized.

  • Execution Correctness: 2/2

    Excellent

    The code shown is syntactically valid and the outputs are consistent with the described operations. The median fill and dtype inspection appear runnable and coherent.

  • Output Quality: 2/3

    Good

    It produced a useful quality report with missing counts, dtypes, duplicates, and post-cleaning null counts reduced to zero for numeric columns. However, the before/after summary is only partially evidenced and the remaining missing region values are not addressed.

  • Reasoning Quality: 1/2

    Needs work

    The reasoning correctly identifies that no columns exceed the 50% missing threshold and that the numeric columns can be median-imputed. It is somewhat shallow and overstates completion despite the remaining missing non-numeric values.

  • Reliability: 1/1

    Excellent

    The workflow is mostly consistent and avoids obvious hallucinations, but it includes a pandas warning and a potentially unnecessary string-to-float conversion heuristic. It is reasonably robust overall.

Try MLJAR Studio

Run the same type of AI-powered data analysis on your own datasets with conversational notebooks in MLJAR Studio.

Explore More AI Analysis Examples

Discover additional workflows across categories. Each example includes prompts, conversation outputs, and model-level scoring so you can compare approaches and results.