Data Scientist · Model Building · Intermediate · Single prompt

Model Comparison AI Prompt

This prompt compares several common algorithm families on equal footing. It is useful when you want to identify strong candidates before investing in tuning or ensembling. It also adds operational context: training time, inference speed, and memory usage.

Prompt text
Train and compare multiple candidate models for predicting {{target_variable}}.

Train these models with default hyperparameters:
1. Logistic Regression / Linear Regression
2. Random Forest (n_estimators=200)
3. Gradient Boosting — XGBoost or LightGBM
4. Support Vector Machine (RBF kernel, scaled features)
5. k-Nearest Neighbors (k=10)

For each model:
- 5-fold cross-validated score (mean ± std)
- Training time
- Inference time per 1000 rows
- Memory usage

Return a ranked comparison table.
Recommend the top 2 models to take forward for hyperparameter tuning, with justification.
Flag any model that is significantly overfitting (train score >> validation score).
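The steps the prompt asks for can be sketched with scikit-learn. This is a minimal illustration, not the prompt's definitive output: it uses a synthetic dataset as a stand-in for your own `X, y`, omits XGBoost/LightGBM (which may not be installed), and picks an arbitrary 0.05 train-validation gap as the overfitting threshold.

```python
# Sketch: 5-fold CV comparison of several model families with default
# hyperparameters, plus a simple overfitting flag. Swap in your own X, y.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "LogisticRegression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC()),
    "kNN (k=10)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=10)),
}

results = []
for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5, return_train_score=True)
    results.append({
        "model": name,
        "val_mean": cv["test_score"].mean(),
        "val_std": cv["test_score"].std(),
        "train_mean": cv["train_score"].mean(),
        "fit_time_s": cv["fit_time"].mean(),
    })

# Ranked comparison table: best mean validation score first
for row in sorted(results, key=lambda r: -r["val_mean"]):
    gap = row["train_mean"] - row["val_mean"]
    flag = "  <-- possible overfit" if gap > 0.05 else ""
    print(f"{row['model']:20s} {row['val_mean']:.3f} ± {row['val_std']:.3f} "
          f"(fit {row['fit_time_s']:.2f}s){flag}")
```

`cross_validate` already reports per-fold fit and score times, which is why it is used here instead of `cross_val_score`; gradient boosting would slot into the `models` dict the same way once installed.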

When to use this prompt

Use case 01

You want to shortlist promising algorithms for a supervised problem.

Use case 02

You need more than one metric and care about runtime or memory too.

Use case 03

You want cross-validated evidence before tuning models.

Use case 04

You need to spot overfitting early across several model families.

What the AI should return

A ranked comparison table across all candidate models with cross-validated performance, variance, training cost, inference speed, memory usage, and a recommendation of the top two models to tune further.
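The two operational columns of that table can be measured directly. A rough sketch under assumptions: a synthetic dataset stands in for your data, inference latency is timed on a 1,000-row batch, and serialized pickle size is used as a common (imperfect) proxy for memory usage.

```python
# Sketch: measure inference time per 1,000 rows and approximate model
# memory footprint for one fitted model.
import pickle
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Inference latency on a fixed 1,000-row batch
batch = X[:1000]
start = time.perf_counter()
model.predict(batch)
latency_ms = (time.perf_counter() - start) * 1000

# Pickle size as a proxy for in-memory model size
model_bytes = len(pickle.dumps(model))

print(f"Inference per 1000 rows: {latency_ms:.1f} ms")
print(f"Serialized model size: {model_bytes / 1e6:.1f} MB")
```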

How to use this prompt

1. Open your data context

Load your dataset, notebook, or working environment so the AI can operate on the actual project context.

2. Copy the prompt text

Paste the prompt above into the AI assistant or prompt input area.

3. Review the output critically

Check whether the result matches your data, assumptions, and desired format before moving on.

4. Chain into the next prompt

Once you have the first result, continue deeper with related prompts in Model Building.

Frequently asked questions

What does the Model Comparison prompt do?

It gives you a structured starting point for model comparison in data science work and helps you move faster without starting from a blank page.

Who is this prompt for?

It is designed for data scientist workflows and marked as intermediate, so it works well as a guided starting point for that level of experience.

What type of prompt is this?

Model Comparison is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.

Can I use this outside MLJAR Studio?

Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.

What should I open next?

Natural next steps from here are AutoML Benchmark, Baseline Model, and Class Imbalance Handling.