How to Analyze a CSV File in Python
A step-by-step AI data analyst session: load a CSV, inspect structure, handle missing values, and generate a full exploratory summary.
What this AI workflow does
This AI Data Analyst workflow loads the Superstore Sales CSV from a URL and inspects its structure with shape, column dtypes, and a preview of the first rows. It checks for missing values and reports counts by column, then computes summary statistics for numeric fields. It generates distribution plots for Sales, Profit, and Shipping Cost to support exploratory analysis.
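The loading-and-inspection step can be sketched as below. This is a minimal illustration, not the workflow's actual generated code: the real run reads the Superstore CSV from its URL with `pd.read_csv(url)`, while here a tiny inline sample is used so the snippet is self-contained.

```python
import io

import pandas as pd

# Hypothetical inline sample standing in for the Superstore Sales CSV;
# the real workflow passes the raw.githubusercontent.com URL to read_csv.
sample_csv = """Order ID,Sales,Profit,Shipping Cost
CA-1001,261.96,41.91,7.5
CA-1002,731.94,219.58,11.2
CA-1003,14.62,,1.3
"""
df = pd.read_csv(io.StringIO(sample_csv))

print(df.shape)   # number of rows and columns, e.g. (3, 4) here
print(df.dtypes)  # per-column data types
print(df.head())  # preview of the first rows
```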
Who this example is for
This is for analysts and students who need a repeatable way to profile a new CSV dataset in Python. It helps anyone who wants an AI-assisted notebook that produces both tabular summaries and basic distribution visualizations.
Expected analysis outcomes
These are the results the AI workflow is expected to generate.
- Dataset shape, dtypes, and first 5 rows
- Missing value counts by column
- Summary statistics for numeric columns via describe()
- Histograms with KDE for Sales, Profit, and Shipping Cost
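The tabular outcomes above (missing-value counts and `describe()` statistics) can be reproduced with a short pandas sketch. The inline sample data is an assumption for illustration; the workflow itself runs these calls on the full Superstore DataFrame.

```python
import io

import pandas as pd

# Hypothetical sample with a few gaps, mimicking the dataset's numeric columns.
sample_csv = """Sales,Profit,Shipping Cost
261.96,41.91,7.5
731.94,,11.2
14.62,6.87,
"""
df = pd.read_csv(io.StringIO(sample_csv))

# Missing value counts by column
missing = df.isna().sum()
print(missing)

# Summary statistics for numeric columns
print(df.describe())
```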
Tools and libraries used
Main Python packages and tooling used to run this AI data analysis task.
- pandas
- numpy
- matplotlib
- seaborn
- requests
Prompt sequence
This is the exact list of prompts used in this workflow. The same prompt sequence is sent to each model so outputs and scores can be compared fairly.
- 1. load the CSV file https://raw.githubusercontent.com/pplonski/datasets-for-start/refs/heads/master/superstore-sales/superstore_dataset2011-2015.csv and show shape, dtypes and first 5 rows
- 2. check for missing values and show counts by column
- 3. show summary statistics for all numeric columns
- 4. plot distributions for the top 3 numeric columns Sales, Profit, Shipping Cost
Model Comparison
We compared several LLM models on the same analysis task. The table below shows their scores side by side. You can also open each model run to check the full conversation and notebook results.
| Model Source | Score | Task | Execution | Output | Reasoning | Reliability | Run |
|---|---|---|---|---|---|---|---|
| gemma4:31b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gemma4:31b conversation |
| gpt-oss:120b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gpt-oss:120b conversation |
| glm-5.1 | 9/10 | 2/2 | 2/2 | 3/3 | 1/2 | 1/1 | Open glm-5.1 conversation |
| gpt-5.4 | 9/10 | 2/2 | 2/2 | 3/3 | 1/2 | 1/1 | Open gpt-5.4 conversation |
| qwen3-coder-next | 9/10 | 2/2 | 2/2 | 3/3 | 1/2 | 1/1 | Open qwen3-coder-next conversation |
| qwen3.5:397b | 4/10 | 1/2 | 1/2 | 1/3 | 1/2 | 0/1 | Open qwen3.5:397b conversation |
How scoring works
Each run is graded on five dimensions:
- Task Completion (0-2): how fully the workflow answered the requested analysis task and completed required steps.
- Execution Correctness (0-2): whether generated code is correct, consistent, and likely runnable without major fixes.
- Output Quality (0-3): how well tables, charts, and results match the expected analytical outcomes.
- Reasoning Quality (0-2): clarity and correctness of interpretation, explanation, and conclusions.
- Reliability (0-1): robustness of the run, including low hallucination risk and stable behavior.
The final score is the sum of these dimensions, giving a total from 0 to 10. Higher values indicate more complete, correct, and reliable notebook results for this workflow.
AI Analysis with gemma4:31b
This run uses AI with a step-by-step list of prompts to perform the analysis task "How to Analyze a CSV File in Python". Below, you can review the full conversation generated with gemma4:31b. After the conversation, we show detailed score cards for this run.
Score cards for gemma4:31b for How to Analyze a CSV File in Python
Total score: 10/10
Task Completion: 2/2
Excellent. All four requested steps were completed: loading the CSV, checking missing values, showing numeric summary statistics, and plotting three distributions.
Execution Correctness: 2/2
Excellent. The provided code is syntactically valid and the outputs shown are consistent with successful execution.
Output Quality: 3/3
Excellent. Outputs match the expected outcomes semantically: shape, dtypes, head, missing-value counts, describe() statistics, and three KDE histograms are all present.
Reasoning Quality: 2/2
Excellent. The workflow reasoning is clear and aligned with each prompt, with appropriate interpretation of the outputs.
Reliability: 1/1
Excellent. The workflow is consistent and robust overall, with no evident hallucinations or unsafe behavior.
Try MLJAR Studio
Run the same type of AI-powered data analysis on your own datasets with conversational notebooks in MLJAR Studio.
Explore More AI Analysis Examples
Discover additional workflows across categories. Each example includes prompts, conversation outputs, and model-level scoring so you can compare approaches and results.