dbt Model Documentation AI Prompt
Copy this prompt template, run it in your AI tool, and use related prompts to continue the workflow.
Write comprehensive dbt documentation for this model.
Model name: {{model_name}}
Layer: {{layer}} (staging, intermediate, mart)
Grain: {{grain}}
Key columns: {{columns}}
Upstream models: {{upstream}}
1. Model-level description:
models:
  - name: fct_orders
    description: |
      Fact table capturing all customer orders at the order grain.
      One row per unique order. Includes financial metrics, fulfillment
      status, and customer and product dimension keys for joining.

      Source: {{ source('app', 'orders') }} joined with shipping data.
      Grain: one row per order_id.
      Refresh: incremental, daily at 06:00 UTC.
      Owner: Data team (analytics-eng@company.com)
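If the same description text is reused across models or exposures, a dbt doc block lets you define it once in a markdown file and reference it from schema.yml. A minimal sketch, assuming a hypothetical doc block named fct_orders_description defined in a file such as models/docs.md:

    {% docs fct_orders_description %}
    Fact table capturing all customer orders at the order grain.
    One row per unique order.
    {% enddocs %}

and referenced from the model's description field:

    description: '{{ doc("fct_orders_description") }}'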
2. Column-level documentation:
    columns:
      - name: order_id
        description: Unique identifier for each order. Primary key.
        tests: [unique, not_null]
      - name: customer_id
        description: Foreign key to dim_customers. The customer who placed the order.
        tests:
          - relationships:
              to: ref('dim_customers')
              field: customer_id
      - name: order_amount_usd
        description: |
          Total order value in USD at time of order, inclusive of all line items
          and exclusive of shipping fees and taxes. Negative values indicate refunds.
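If the model also exposes a status field, an accepted_values test documents the allowed states right next to the description. A minimal sketch, assuming a hypothetical fulfillment_status column and an assumed value list:

      - name: fulfillment_status
        description: Current fulfillment state of the order.  # hypothetical column for illustration
        tests:
          - accepted_values:
              values: ['pending', 'shipped', 'delivered', 'returned']  # assumed value list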
3. Meta fields for data catalog integration:
    meta:
      owner: 'analytics-engineering'
      domain: 'finance'
      tier: 'gold'
      pii: false
      sla_hours: 4
4. Tags for organization:
    config:
      tags: ['finance', 'daily', 'mart']
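Tags can also be applied to whole folders from dbt_project.yml, so every model in a directory inherits them. A minimal sketch, assuming a hypothetical project named my_project with a marts/finance subfolder:

models:
  my_project:
    marts:
      finance:
        +tags: ['finance', 'daily']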
5. Generating and hosting docs:
dbt docs generate → builds the documentation artifacts (catalog.json, manifest.json, and the static index.html) in the target/ directory
dbt docs serve → serves the generated documentation site locally
For production: host the contents of the generated target/ directory on:
- dbt Cloud: built-in docs hosting
- GitHub Pages or Netlify (static site deployment)
- Internal data catalog (DataHub, Atlan, Alation) via dbt artifact import
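If descriptions should also be visible inside the warehouse or picked up by a catalog, the persist_docs config writes model and column descriptions to the database as relation and column comments. A minimal dbt_project.yml sketch, assuming a hypothetical project named my_project (adapter support for persist_docs varies):

models:
  my_project:
    +persist_docs:
      relation: true
      columns: true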
Return: complete schema.yml entry for the model, column documentation, meta fields, and a documentation hosting recommendation.
When to use this prompt
Use it when you want to begin dbt documentation work without writing the first draft from scratch.
Use it when you want a more consistent structure for AI output across projects or datasets.
Use it when you want prompt-driven work to turn into a reusable notebook or repeatable workflow later.
Use it when you want a clear next step into adjacent prompts in dbt Documentation or the wider Analytics Engineer (dbt) library.
What the AI should return
The AI should return a structured result that covers the main requested outputs: a model-level description, column-level documentation, meta fields, tags, and a documentation hosting recommendation. The final answer should stay clear, actionable, and easy to review inside a dbt documentation workflow for analytics engineering work.
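For reference, the fragments above nest under a single model entry in one schema.yml file. A minimal consolidated sketch, abridged from the fct_orders example:

version: 2

models:
  - name: fct_orders
    description: Fact table capturing all customer orders at the order grain.
    config:
      tags: ['finance', 'daily', 'mart']
    meta:
      owner: 'analytics-engineering'
      tier: 'gold'
    columns:
      - name: order_id
        description: Unique identifier for each order. Primary key.
        tests: [unique, not_null]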
How to use this prompt
Open your data context
Load your dataset, notebook, or working environment so the AI can operate on the actual project context.
Copy the prompt text
Use the copy button above and paste the prompt into the AI assistant or prompt input area.
Review the output critically
Check whether the result matches your data, assumptions, and desired format before moving on.
Chain into the next prompt
Once you have the first result, continue deeper with related prompts in dbt Documentation.
Frequently asked questions
What does the dbt Model Documentation prompt do?
It gives you a structured dbt documentation starting point for analytics engineer (dbt) work and helps you move faster without starting from a blank page.
Who is this prompt for?
It is designed for analytics engineer (dbt) workflows and marked as beginner, so it works well as a guided starting point for that level of experience.
What type of prompt is this?
dbt Model Documentation is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.
Can I use this outside MLJAR Studio?
Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.
What should I open next?
Natural next steps from here are dbt Governance and Standards and dbt Lineage and Impact Analysis.