Data Engineer · Data Warehouse Patterns · Advanced · Single prompt

Query Performance Tuning AI Prompt

This prompt guides a methodical tune-up of a slow warehouse query: analyze the execution plan, then rewrite the most expensive parts. It is useful when teams need to cut query latency while also reducing scanned data and compute cost. The answer should prioritize the changes with the biggest likely payoff.

Prompt text
Tune this slow data warehouse query for performance.

Query: {{slow_query}}
Current runtime: {{current_runtime}}
Target runtime: {{target_runtime}}
Platform: {{platform}}

Work through these optimizations in order:

1. Execution plan analysis:
   - Run EXPLAIN ANALYZE (or platform equivalent)
   - Identify the most expensive operations: full table scans, hash joins on large tables, sorts on large datasets
   - Check estimated vs actual row counts — large divergence indicates stale statistics
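Step 1 can be sketched with a minimal, runnable illustration. This uses SQLite (via Python's `sqlite3`) rather than a real warehouse, and the `orders` table is hypothetical; warehouse `EXPLAIN ANALYZE` output is far richer, but the reading habit is the same: look for full scans that an index, partition filter, or cluster key could avoid.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, amount REAL)")

# Without an index on customer_id, the plan reports a full table scan.
plan1 = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
print(plan1[0][3])  # detail column, e.g. 'SCAN orders'

# After adding an index, the planner switches to an index search.
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan2 = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
print(plan2[0][3])  # e.g. 'SEARCH orders USING INDEX idx_orders_customer ...'
```

On a warehouse, the analogous "before" signal is a scan of all partitions or micro-partitions; the "after" is partition pruning or clustered access.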

2. Filter pushdown:
   - Ensure WHERE clause filters on partitioned/clustered columns appear as early as possible
   - Check if filters are being applied before or after a JOIN — move them before the JOIN
   - Replace HAVING with WHERE where possible (filter before aggregation)
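The HAVING-to-WHERE rewrite in step 2 can be demonstrated end to end. The `sales` table below is a made-up example; the point is that both queries return identical rows, but the WHERE form filters before grouping, so less data flows into the aggregation.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 10), ("west", 20), ("east", 5), ("north", 7)])

# Anti-pattern: HAVING filters a plain (non-aggregate) column AFTER
# aggregation, so every region is grouped first and then discarded.
slow = con.execute("""
    SELECT region, SUM(amount) FROM sales
    GROUP BY region
    HAVING region <> 'north'
""").fetchall()

# Rewrite: WHERE filters rows BEFORE aggregation.
fast = con.execute("""
    SELECT region, SUM(amount) FROM sales
    WHERE region <> 'north'
    GROUP BY region
""").fetchall()

assert sorted(slow) == sorted(fast)  # same answer, cheaper plan
```

HAVING is still the right tool when the filter genuinely involves an aggregate, e.g. `HAVING SUM(amount) > 100`.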

3. Join optimization:
   - Order JOINs from smallest to largest result set
   - Use broadcast/replicate hint for small dimension tables
   - Check for accidental cartesian products (missing JOIN conditions)
   - Replace correlated subqueries with JOINs or window functions
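The last bullet of step 3 is often the biggest single win. Here is a sketch of the rewrite on a hypothetical `orders` table ("top order per customer"): the correlated form may re-execute the inner query per outer row, while the window-function form makes one pass. Requires SQLite ≥ 3.25 for window functions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer_id INT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 10), (1, 30), (2, 5), (2, 50)])

# Correlated subquery: inner MAX() depends on the outer row.
correlated = con.execute("""
    SELECT customer_id, amount FROM orders o
    WHERE amount = (SELECT MAX(amount) FROM orders i
                    WHERE i.customer_id = o.customer_id)
""").fetchall()

# Window-function rewrite: rank within each customer in a single pass.
windowed = con.execute("""
    SELECT customer_id, amount FROM (
        SELECT customer_id, amount,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY amount DESC) AS rn
        FROM orders)
    WHERE rn = 1
""").fetchall()

assert sorted(correlated) == sorted(windowed)
```

Note the two forms differ on ties: the correlated version keeps all tied rows, `ROW_NUMBER()` keeps exactly one (use `RANK()` to keep ties).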

4. Aggregation optimization:
   - Pre-aggregate before joining to reduce row count going into the join
   - Use approximate aggregations (APPROX_COUNT_DISTINCT) where exact precision is not required
   - Push GROUP BY to a subquery before the outer SELECT
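The pre-aggregation idea in step 4 can be shown concretely (tables are hypothetical): collapsing the fact table to one row per key before the join shrinks the join input from "one row per order" to "one row per customer".

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer_id INT, amount REAL)")
con.execute("CREATE TABLE customers (id INT, name TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 10), (1, 30), (2, 5)])
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Ada"), (2, "Bob")])

# Join-then-aggregate: every order row passes through the join.
join_first = con.execute("""
    SELECT c.name, SUM(o.amount)
    FROM orders o JOIN customers c ON c.id = o.customer_id
    GROUP BY c.name
""").fetchall()

# Pre-aggregate: collapse orders per customer, then join the small result.
preagg = con.execute("""
    SELECT c.name, t.total
    FROM (SELECT customer_id, SUM(amount) AS total
          FROM orders GROUP BY customer_id) t
    JOIN customers c ON c.id = t.customer_id
""").fetchall()

assert sorted(join_first) == sorted(preagg)
```

The rewrite is safe here because the join key is unique on the dimension side; if the join could duplicate fact rows, the two forms would not be equivalent.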

5. Materialization:
   - If this query runs frequently: materialize it as a table and schedule refresh
   - Create a summary table at the right grain to avoid full re-aggregation each time
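A minimal materialization sketch for step 5, again on SQLite with made-up names: the expensive aggregation runs once into a summary table at daily grain, and consumers read the small table instead of re-aggregating raw events. Real warehouses add a scheduled refresh (or a native materialized view) on top of this pattern.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (day TEXT, clicks INT)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [("2024-01-01", 3), ("2024-01-01", 2), ("2024-01-02", 7)])

# Materialize the aggregation once at the chosen grain (per day).
con.execute("""
    CREATE TABLE daily_clicks AS
    SELECT day, SUM(clicks) AS clicks
    FROM events
    GROUP BY day
""")

# Dashboards now hit the tiny summary table, not the raw events.
rows = con.execute("SELECT * FROM daily_clicks ORDER BY day").fetchall()
```

Choosing the grain is the key design decision: too fine and the summary is nearly as big as the source; too coarse and it cannot answer the dashboard's questions.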

6. Statistics:
   - Run ANALYZE TABLE to refresh statistics if the query plan looks wrong
   - Check column statistics: histograms for skewed columns, NDV for join columns
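Step 6 can also be illustrated on SQLite, whose bare `ANALYZE` plays the role of the warehouse's `ANALYZE TABLE`: it collects row counts and per-index selectivity that the planner uses for costing, and the collected numbers are inspectable.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INT)")
con.execute("CREATE INDEX idx_x ON t (x)")
# 1000 rows, only 10 distinct values of x (a skewed, low-NDV column).
con.executemany("INSERT INTO t VALUES (?)", [(i % 10,) for i in range(1000)])

# Refresh planner statistics; results land in the sqlite_stat1 table.
con.execute("ANALYZE")
stats = con.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)  # row count and average rows per distinct index value
```

When the estimated rows in a plan diverge wildly from reality (the check in step 1), refreshing these statistics is usually the cheapest fix to try first.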

Return: annotated execution plan, specific rewrites for each optimization applied, and before/after runtime comparison.

When to use this prompt

Use case 01

When a query is too slow for dashboards, pipelines, or ad hoc analysis.

Use case 02

When EXPLAIN output is available or can be generated.

Use case 03

When you need specific rewrites, not general SQL advice.

Use case 04

When deciding whether to optimize the query itself or materialize the result.

What the AI should return

Return an annotated optimization plan based on execution-plan findings. Include query rewrites, join or aggregation improvements, statistics recommendations, and any materialization options. Show what each change is expected to improve and summarize the likely before/after runtime picture.

How to use this prompt

1

Open your data context

Load your dataset, notebook, or working environment so the AI can operate on the actual project context.

2

Copy the prompt text

Use the copy button above and paste the prompt into the AI assistant or prompt input area.

3

Review the output critically

Check whether the result matches your data, assumptions, and desired format before moving on.

4

Chain into the next prompt

Once you have the first result, continue deeper with related prompts in Data Warehouse Patterns.

Frequently asked questions

What does the Query Performance Tuning prompt do?

It gives you a structured starting point for data warehouse pattern work as a data engineer and helps you move faster without starting from a blank page.

Who is this prompt for?

It is designed for data engineer workflows and marked as advanced, so it works well as a guided starting point for that level of experience.

What type of prompt is this?

Query Performance Tuning is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.

Can I use this outside MLJAR Studio?

Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.

What should I open next?

Natural next steps from here are Data Vault Design, Fact Table Loading Pattern, and Medallion Architecture Design.