Sentiment Analysis of Amazon Reviews
Analyze sentiment in Amazon product reviews using VADER and TextBlob, visualize score distributions, and identify most positive and negative reviews.
What this AI workflow does
This AI Data Analyst workflow loads a sample of the Amazon Fine Food Reviews dataset from a URL and summarizes dataset shape and star rating distribution. It generates VADER sentiment scores for each review and adds them as new columns for analysis. It visualizes sentiment score distributions and their relationship to star ratings, then extracts the most positive and most negative review excerpts with scores.
Who this example is for
This is for analysts and data scientists who want a reproducible notebook pattern for sentiment scoring and basic validation against existing labels like star ratings. It is also useful for NLP learners comparing lexicon-based sentiment methods and reviewing edge cases by inspecting extreme examples.
Expected analysis outcomes
These are the results the AI workflow is expected to generate.
- Dataset shape and star rating histogram
- VADER sentiment scores appended as new columns
- Histogram of compound sentiment scores
- Scatter plot comparing sentiment scores to star ratings
- Three most positive and three most negative review excerpts with sentiment scores
Tools and libraries used
Main Python packages and tooling used to run this AI data analysis task.
- pandas
- nltk
- vaderSentiment
- textblob
- matplotlib
- seaborn
Prompt sequence
This is the exact list of prompts used in this workflow. The same prompt sequence is sent to each model so outputs and scores can be compared fairly.
- load the reviews dataset https://raw.githubusercontent.com/pplonski/datasets-for-start/refs/heads/master/amazon-fine-food-reviews/amazon_fine_food_reviews_10k.csv and show shape and rating distribution
- compute sentiment scores using VADER for each review
- plot sentiment score distribution and compare with star ratings
- show the 3 most positive and 3 most negative reviews
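Prompts 3 and 4 above amount to a histogram, a scatter plot, and a top/bottom-3 extraction. A minimal sketch, using a synthetic stand-in DataFrame (the `Score` and `vader_compound` column names are assumptions matching the scored dataset):

```python
import pandas as pd
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless-safe backend
import matplotlib.pyplot as plt

# Synthetic stand-in; in the real workflow these columns come from VADER scoring
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "Score": rng.integers(1, 6, size=200),
    "vader_compound": rng.uniform(-1, 1, size=200),
    "Text": ["review text"] * 200,
})

# Prompt 3: distribution of compound scores and comparison with star ratings
fig, (ax_hist, ax_scatter) = plt.subplots(1, 2, figsize=(10, 4))
ax_hist.hist(df["vader_compound"], bins=20)
ax_hist.set(title="Compound score distribution", xlabel="VADER compound")
ax_scatter.scatter(df["vader_compound"], df["Score"], alpha=0.3)
ax_scatter.set(title="Sentiment vs. star rating",
               xlabel="VADER compound", ylabel="Stars")
fig.tight_layout()

# Prompt 4: three most positive and three most negative reviews
top3 = df.nlargest(3, "vader_compound")[["vader_compound", "Text"]]
bottom3 = df.nsmallest(3, "vader_compound")[["vader_compound", "Text"]]
print(top3, bottom3, sep="\n")
```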
Model Comparison
We compared several LLMs on the same analysis task. The table below shows their scores side by side. You can also open each model run to check the full conversation and notebook results.
| Model Source | Score | Task | Execution | Output | Reasoning | Reliability | Run |
|---|---|---|---|---|---|---|---|
| gemma4:31b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gemma4:31b conversation |
| gpt-oss:120b | 10/10 | 2/2 | 2/2 | 3/3 | 2/2 | 1/1 | Open gpt-oss:120b conversation |
| qwen3-coder-next | 9/10 | 2/2 | 2/2 | 3/3 | 1/2 | 1/1 | Open qwen3-coder-next conversation |
| glm-5.1 | 8/10 | 2/2 | 2/2 | 3/3 | 1/2 | 0/1 | Open glm-5.1 conversation |
| gpt-5.4 | 8/10 | 1/2 | 2/2 | 2/3 | 2/2 | 1/1 | Open gpt-5.4 conversation |
| qwen3.5:397b | 4/10 | 1/2 | 1/2 | 1/3 | 1/2 | 0/1 | Open qwen3.5:397b conversation |
How scoring works
Each run is graded on five dimensions:
- Task Completion (0-2): how fully the workflow answered the requested analysis task and completed required steps.
- Execution Correctness (0-2): whether generated code is correct, consistent, and likely runnable without major fixes.
- Output Quality (0-3): how well tables, charts, and results match the expected analytical outcomes.
- Reasoning Quality (0-2): clarity and correctness of interpretation, explanation, and conclusions.
- Reliability (0-1): robustness of the run, including low hallucination risk and stable behavior.
The final score is the sum of these dimensions, for a total of 0-10. Higher values indicate more complete, correct, and reliable notebook results for this workflow.
AI Analysis with gpt-5.4
This run uses AI with a step-by-step list of prompts to perform the Sentiment Analysis of Amazon Reviews task. Below, you can review the full conversation generated with gpt-5.4. After the conversation, we show detailed score cards for this run.
Score cards for gpt-5.4 for Sentiment Analysis of Amazon Reviews
Total score: 8/10
Task Completion: 1/2
Needs work: The workflow completed loading, rating distribution, VADER scoring, and identifying positive/negative reviews, but it did not show the requested histogram of compound scores or a scatter plot versus star rating.
Execution Correctness: 2/2
Excellent: The code shown is coherent and likely runnable: it loads the CSV, computes VADER scores, joins them to the dataframe, and produces plots and review excerpts without obvious syntax errors.
Output Quality: 2/3
Good: It correctly reports dataset shape, rating counts, adds VADER columns, and displays the 3 most positive and 3 most negative reviews. However, the expected compound-score histogram and star-rating comparison plot are missing, so the output is incomplete.
Reasoning Quality: 2/2
Excellent: The explanations are generally correct and consistent with the outputs, including the skew toward 5-star reviews and the interpretation of VADER scores. The reasoning is somewhat shallow and does not address the missing requested visualizations.
Reliability: 1/1
Excellent: The workflow is reasonably consistent and uses standard libraries and methods. It is somewhat fragile in presentation because it omits one of the core requested plots and relies on a boxplot instead.
Try MLJAR Studio
Run the same type of AI-powered data analysis on your own datasets with conversational notebooks in MLJAR Studio.
Explore More AI Analysis Examples
Discover additional workflows across categories. Each example includes prompts, conversation outputs, and model-level scoring so you can compare approaches and results.