AI Data Analysis Benchmarks for Statistics

We defined practical analysis workflows from multiple domains, then ran them with AI Data Analyst using different LLM engines. On this page you can browse each workflow, open the full notebook conversations, and compare model quality in shared score tables. In the Statistics category, every tested model completed both workflows with a perfect score.

Statistics Workflow Examples

Browse reproducible AI data analysis workflows in Statistics. Open any example to review prompts, conversation steps, generated code, outputs, and model-level quality scores.

Hypothesis Testing in Python (t-test, ANOVA)

Perform t-tests, chi-square tests, and ANOVA using real data to answer business questions — guided by an AI data analyst.

Open analysis →
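As a sketch of the kind of code this workflow produces, here is a minimal Python example using scipy.stats. The data here is synthetic and the group names and effect sizes are illustrative, not taken from the benchmark dataset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two-sample t-test: compare spend between two customer groups (synthetic data)
group_a = rng.normal(loc=100, scale=15, size=50)
group_b = rng.normal(loc=108, scale=15, size=50)
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# One-way ANOVA: compare means across three regions
region_1 = rng.normal(100, 15, 40)
region_2 = rng.normal(105, 15, 40)
region_3 = rng.normal(110, 15, 40)
f_stat, f_p = stats.f_oneway(region_1, region_2, region_3)

# Chi-square test of independence on a 2x2 contingency table
table = np.array([[30, 20], [15, 35]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"t-test:     t={t_stat:.2f}, p={t_p:.4f}")
print(f"ANOVA:      F={f_stat:.2f}, p={f_p:.4f}")
print(f"chi-square: chi2={chi2:.2f}, p={chi_p:.4f}, dof={dof}")
```

In the actual workflow, the AI data analyst writes equivalent tests against the loaded dataset and interprets the p-values in the context of the business question.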

Linear Regression Analysis in Python

Run simple and multiple linear regression, interpret coefficients, check assumptions, and evaluate model fit using statsmodels and scikit-learn.

Open analysis →

Model Comparison for Statistics

Compare LLM performance across workflows in this category. Open any score chip to jump directly to that model run and inspect the full conversation and notebook output.

Average score (0-10)

Model            | Average score | Scored workflows
gemma4:31b       | 10.00         | 2
glm-5.1          | 10.00         | 2
gpt-5.4          | 10.00         | 2
gpt-oss:120b     | 10.00         | 2
qwen3-coder-next | 10.00         | 2
qwen3.5:397b     | 10.00         | 2

Detailed Workflow Comparison Table for Statistics

This table compares model scores for each workflow in Statistics. Open any score chip to jump directly to the selected model conversation and review full prompts, code, outputs, and score cards.

Workflow | gemma4:31b | glm-5.1 | gpt-5.4 | gpt-oss:120b | qwen3-coder-next | qwen3.5:397b
Hypothesis Testing in Python (t-test, ANOVA) (hypothesis-testing-python) | 10.0/10 | 10.0/10 | 10.0/10 | 10.0/10 | 10.0/10 | 10.0/10
Linear Regression Analysis in Python (regression-analysis-python) | 10.0/10 | 10.0/10 | 10.0/10 | 10.0/10 | 10.0/10 | 10.0/10

What This Benchmark Shows

We ran the same step-by-step data analysis workflows across multiple LLMs and compared the results using a shared scoring rubric. In Statistics, every tested model produced strong notebook outputs with full task completion and useful analytical reasoning. Use these examples as a reference for prompt design, model selection, and workflow quality before running similar analyses on your own data in MLJAR Studio.

Start using AI for Statistics

MLJAR Studio helps you analyze data with AI, run machine learning workflows, and build reproducible notebook-based results on your own computer.

Runs locally • Supports local LLMs