Data Lineage Tracking AI Prompt
When you need column-level traceability across a data platform.
This prompt designs lineage tracking so teams can understand how data moves from source columns through transformations into analytical outputs. It supports impact analysis, root-cause investigation, and compliance questions, especially in platforms with dbt, Spark, and orchestration tools. The answer should treat lineage as an operational capability, not just documentation.
Implement column-level data lineage tracking for this data platform.
1. Lineage metadata model:
- Node types: source_system, table, column, transformation, pipeline_run
- Edge types: table_derives_from, column_derives_from, transformation_reads, transformation_writes
- Lineage table DDL:
```sql
CREATE TABLE column_lineage (
    target_table               VARCHAR,
    target_column              VARCHAR,
    source_table               VARCHAR,
    source_column              VARCHAR,
    transformation_description VARCHAR,
    pipeline_name              VARCHAR,
    recorded_at                TIMESTAMP
);
```
2. Automated lineage extraction:
- For dbt: parse dbt's manifest.json for model-level lineage from ref() and source() calls; note that the manifest alone does not record column mappings, so column-level edges require parsing each model's compiled SQL (e.g., with sqlglot)
- For Spark SQL: parse the SQL AST to extract table and column references
- For Airflow DAGs: extract lineage from task input/output datasets (OpenLineage / Marquez)
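As a sketch of the dbt extraction step, the snippet below walks a stub of manifest.json's "nodes" section and emits table-level table_derives_from edges from each model's depends_on list. All model and source names are hypothetical, and a real manifest is far larger; column-level edges would come from parsing the compiled SQL, as noted above.

```python
# Minimal stand-in for dbt's manifest.json ("nodes" maps unique_id -> model metadata).
manifest = {
    "nodes": {
        "model.shop.orders_enriched": {
            "name": "orders_enriched",
            "depends_on": {"nodes": ["model.shop.stg_orders", "source.shop.raw.customers"]},
        },
        "model.shop.stg_orders": {
            "name": "stg_orders",
            "depends_on": {"nodes": ["source.shop.raw.orders"]},
        },
    }
}

def extract_table_edges(manifest: dict) -> list[tuple[str, str]]:
    """Return (source, target) table_derives_from edges from a dbt manifest dict."""
    edges = []
    for unique_id, node in manifest.get("nodes", {}).items():
        for parent in node.get("depends_on", {}).get("nodes", []):
            edges.append((parent, unique_id))
    return edges

edges = extract_table_edges(manifest)
for src, dst in edges:
    print(f"{src} -> {dst}")
```

Each edge can then be inserted into the lineage table (or upserted keyed on source/target) with the pipeline name and a recorded_at timestamp.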
3. Lineage use cases:
- Impact analysis: 'if I change this source column, which downstream tables and reports are affected?'
- Root cause analysis: 'this report column has wrong values — trace back to the source'
- Compliance: 'which tables contain data derived from PII column X?'
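All three use cases above reduce to graph traversal over the column_lineage rows: impact analysis and PII propagation walk downstream, root-cause analysis walks upstream. A minimal in-memory sketch of the downstream direction (table and column names are made up; in the warehouse this would typically be a recursive CTE over the lineage table):

```python
from collections import defaultdict, deque

# Rows mirroring column_lineage: (source_table, source_column, target_table, target_column).
lineage_rows = [
    ("raw.customers", "email", "stg.customers", "email"),
    ("stg.customers", "email", "marts.customer_360", "contact_email"),
    ("marts.customer_360", "contact_email", "reports.churn_dashboard", "contact_email"),
    ("raw.orders", "amount", "marts.revenue", "total_amount"),
]

# Adjacency list: (table, column) -> derived columns.
downstream = defaultdict(list)
for src_t, src_c, tgt_t, tgt_c in lineage_rows:
    downstream[(src_t, src_c)].append((tgt_t, tgt_c))

def all_downstream(table: str, column: str) -> set[tuple[str, str]]:
    """Every column transitively derived from (table, column) — the blast radius."""
    seen, queue = set(), deque([(table, column)])
    while queue:
        node = queue.popleft()
        for child in downstream[node]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# Impact analysis and PII propagation are the same query with different intent:
print(all_downstream("raw.customers", "email"))
```

Root-cause analysis is the identical traversal over a reversed adjacency list built from the same rows.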
4. Lineage UI (if building custom):
- Graph visualization: nodes are tables/columns, edges are derivation relationships
- Search: find all downstream consumers of a given column
- Highlight path from a source column to a final report metric
5. OpenLineage integration:
- Emit OpenLineage events from Airflow and Spark jobs
- Store in Marquez or forward to data catalog (DataHub, Atlan, Alation)
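An OpenLineage run event is plain JSON, so it helps to see the shape even if the Airflow and Spark integrations emit it for you. Below is a hand-assembled sketch of a COMPLETE event with one input and one output dataset; the namespaces, job name, and producer URL are invented for illustration.

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

# Hand-assembled OpenLineage-style run event (normally produced by the
# openlineage client libraries or the Airflow/Spark integrations).
event = {
    "eventType": "COMPLETE",
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "run": {"runId": str(uuid4())},
    "job": {"namespace": "airflow", "name": "orders_etl.load_marts"},
    "inputs": [{"namespace": "warehouse", "name": "stg.orders"}],
    "outputs": [{"namespace": "warehouse", "name": "marts.revenue"}],
    "producer": "https://example.com/lineage-demo",
}

payload = json.dumps(event)
# POST payload to Marquez's lineage endpoint, or forward it to the catalog.
print(payload)
```

Marquez stores these events and reconstructs the job/dataset graph from them, which is what makes lineage queryable over time rather than a static diagram.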
Return: lineage metadata DDL, an automated extraction script for dbt, an impact analysis query, and a PII propagation query.
When to use this prompt
When impact analysis is required before schema changes.
When debugging incorrect metrics back to their source.
When PII propagation or compliance lineage must be demonstrable.
What the AI should return
Return lineage metadata DDL, extraction approach for the stated tools, and example queries for impact analysis and PII tracing. Include a description of node and edge types and how lineage events are recorded over time. The output should be specific enough to guide a first implementation.
How to use this prompt
Open your data context
Load your dataset, notebook, or working environment so the AI can operate on the actual project context.
Copy the prompt text
Use the copy button above and paste the prompt into the AI assistant or prompt input area.
Review the output critically
Check whether the result matches your data, assumptions, and desired format before moving on.
Chain into the next prompt
Once you have the first result, continue deeper with related prompts in Data Quality.
Frequently asked questions
What does the Data Lineage Tracking prompt do?
It gives you a structured data quality starting point for data engineering work and helps you move faster without starting from a blank page.
Who is this prompt for?
It is designed for data engineering workflows and marked as intermediate, so it works well as a guided starting point for that level of experience.
What type of prompt is this?
Data Lineage Tracking is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.
Can I use this outside MLJAR Studio?
Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.
What should I open next?
Natural next steps from here are Data Quality Framework Chain, Data Quality Test Suite, and Duplicate Detection at Scale.