Data Versioning with DVC AI Prompt
This prompt introduces DVC-based data versioning and pipeline tracking for an ML project. It covers remote storage, tracked datasets, stage definitions, experiments, metrics, and CI integration so data and pipeline state remain reproducible over time.
Set up data versioning and pipeline tracking for this ML project using DVC.
1. DVC initialization:
- dvc init in the Git repository
- Configure remote storage: S3, GCS, or Azure Blob
- .dvcignore file for files to exclude
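The initialization steps above can be sketched as a short shell session. The bucket path is a placeholder; GCS and Azure remotes use gs:// and azure:// URLs with the same commands:

```shell
# Initialize DVC inside an existing Git repository
# (dvc init stages its own config files for you)
dvc init
git commit -m "Initialize DVC"

# Configure a default remote (placeholder bucket path)
dvc remote add -d storage s3://my-bucket/dvc-store
git commit .dvc/config -m "Configure DVC remote"

# Exclude scratch files from DVC, analogous to .gitignore
echo "*.tmp" >> .dvcignore
```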
2. Data versioning:
- Track large data files and directories: dvc add data/raw/
- Commit .dvc files to Git, push data to remote: dvc push
- Retrieve a specific data version: git checkout {commit} && dvc pull
- List data versions and their Git commits for audit trail
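A minimal workflow for the versioning steps above, assuming the remote from step 1 is already configured (the dataset path is illustrative):

```shell
# Track a data directory; DVC writes data/raw.dvc and gitignores the data itself
dvc add data/raw
git add data/raw.dvc data/.gitignore
git commit -m "Track raw dataset"
dvc push                          # upload the data to the configured remote

# Retrieve a specific data version
git checkout {commit}
dvc pull

# Audit trail: every dataset change is a change to its .dvc file
git log --oneline -- data/raw.dvc
```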
3. DVC pipeline definition (dvc.yaml):
- Define pipeline stages: preprocess → train → evaluate
- For each stage: deps (inputs), outs (outputs), params (config values), metrics (metrics.json)
- Cache: DVC caches stage outputs, so unchanged stages are skipped on re-runs
- Run the pipeline: dvc repro
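A dvc.yaml sketch for the preprocess → train → evaluate pipeline described above; the script names, parameter keys, and output paths are illustrative assumptions, not part of the prompt:

```yaml
stages:
  preprocess:
    cmd: python src/preprocess.py
    deps:
      - src/preprocess.py
      - data/raw
    params:
      - preprocess.test_size
    outs:
      - data/processed
  train:
    cmd: python src/train.py
    deps:
      - src/train.py
      - data/processed
    params:
      - train.learning_rate
      - train.epochs
    outs:
      - models/model.pkl
  evaluate:
    cmd: python src/evaluate.py
    deps:
      - src/evaluate.py
      - models/model.pkl
      - data/processed
    metrics:
      - metrics.json:
          cache: false
```

With this file committed, `dvc repro` runs only the stages whose deps or params changed since the last run.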
4. Experiment tracking:
- dvc exp run for tracking experiments with different params
- dvc exp show to compare experiments in a table
- dvc exp branch to create a Git branch from a promising experiment
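The experiment commands above in sequence (the parameter key and names are placeholders matching the sketched dvc.yaml):

```shell
# Run an experiment with a parameter override (-S is short for --set-param)
dvc exp run -S train.learning_rate=0.01

# Compare experiments, params, and metrics in one table
dvc exp show

# Promote a promising experiment to its own Git branch
dvc exp branch exp-name tune-lr
```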
5. Metrics and params tracking:
- Save metrics as JSON: accuracy, loss, etc.
- dvc metrics show, dvc metrics diff to compare across commits
- dvc params diff to see which params changed between runs
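A quick comparison session for the metrics and params tracked above:

```shell
# Show metrics from metrics.json in the current workspace
dvc metrics show

# Compare metrics and params against the previous commit
dvc metrics diff HEAD~1
dvc params diff HEAD~1
```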
6. CI/CD integration:
- dvc pull in CI before running tests
- dvc repro in CI to re-run the pipeline if deps changed
- dvc push in CI to save new data artifacts after processing
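One way to wire these CI steps together is a GitHub Actions job along these lines. This is a sketch: the secret names, Python version, and the `dvc[s3]` extra are assumptions to adapt to your remote:

```yaml
name: pipeline
on: [push]
jobs:
  repro:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install "dvc[s3]"
      - run: dvc pull        # fetch tracked data before tests
      - run: dvc repro       # re-run stages whose deps changed
      - run: dvc push        # save new artifacts back to the remote
```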
Return: dvc.yaml pipeline definition, Git workflow for data versioning, and CI/CD integration.
When to use this prompt
when large datasets should be versioned alongside code without storing them in Git
when preprocessing, training, and evaluation should be defined as reproducible stages
when experiment comparison should include params and metrics in version control
when CI should be able to pull data and reproduce the pipeline
What the AI should return
A DVC setup including dvc.yaml stages, data versioning workflow, experiment commands, and CI integration guidance.
How to use this prompt
Open your data context
Load your dataset, notebook, or working environment so the AI can operate on the actual project context.
Copy the prompt text
Use the copy button above and paste the prompt into the AI assistant or prompt input area.
Review the output critically
Check whether the result matches your data, assumptions, and desired format before moving on.
Chain into the next prompt
Once you have the first result, continue deeper with related prompts in MLOps and CI/CD.
Frequently asked questions
What does the Data Versioning with DVC prompt do?
It gives you a structured MLOps and CI/CD starting point for ML engineer work and helps you move faster without starting from a blank page.
Who is this prompt for?
It is designed for ML engineer workflows and marked as advanced, so it works well as a guided starting point for that level of experience.
What type of prompt is this?
Data Versioning with DVC is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.
Can I use this outside MLJAR Studio?
Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.
What should I open next?
Natural next steps from here are Automated Retraining Trigger, CI/CD for ML Pipeline, and MLOps Platform Chain.