Training Pipeline as Code AI Prompt
This prompt refactors an ad-hoc training script into a reproducible pipeline with configuration management, stage separation, artifact versioning, and a CLI. It is useful when a one-off training file has grown into something that needs repeatable execution and maintenance.
Refactor this ad-hoc training script into a reproducible, configurable ML pipeline.
1. Configuration management:
- Move all hyperparameters and paths to a config file (YAML or JSON)
- Use OmegaConf or Hydra for hierarchical config with command-line overrides
- Never hardcode paths — all paths are config variables with sensible defaults
- Log the full resolved config at the start of every run
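The override mechanics described above can be sketched with the standard library alone (in practice, OmegaConf's merge and from_cli helpers or Hydra's CLI handle this). All config keys here are illustrative:

```python
import json
from copy import deepcopy

def apply_override(cfg: dict, dotted_key: str, value):
    """Set cfg['a']['b'] = value for dotted_key 'a.b' (parent keys assumed to exist)."""
    node = cfg
    *parents, leaf = dotted_key.split(".")
    for key in parents:
        node = node[key]
    node[leaf] = value

def load_config(base: dict, overrides: dict) -> dict:
    """Merge command-line style overrides into a base config, without mutating it."""
    cfg = deepcopy(base)
    for dotted_key, value in overrides.items():
        apply_override(cfg, dotted_key, value)
    # Log the full resolved config at the start of every run.
    print("Resolved config:", json.dumps(cfg, indent=2))
    return cfg

base = {"model": {"lr": 1e-3, "epochs": 10}, "paths": {"data": "data/train.csv"}}
cfg = load_config(base, {"model.lr": 1e-4})
```

The base config would normally be loaded from a YAML or JSON file rather than defined inline; the point is that overrides are applied onto a copy, so the original defaults stay intact for logging and comparison.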
2. Pipeline stages as separate functions or classes:
- data_preprocessing(): validate, clean, and split data
- train(): train model with given config
- evaluate(): evaluate on test set and return metrics dict
- export(): save model in deployment format
- Each stage is independently runnable and testable
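A minimal sketch of the stage separation, using the function names from the list above. The data shapes, the 80/20 split, and the mean-of-values "model" are stand-in assumptions so each stage stays independently runnable and testable:

```python
import json

def data_preprocessing(rows):
    """Validate/clean raw rows (drop missing values) and split 80/20 train/test."""
    clean = [r for r in rows if r is not None]
    cut = int(0.8 * len(clean))
    return clean[:cut], clean[cut:]

def train(train_rows, cfg):
    """Stand-in 'model': the mean of the training values, scaled by a config knob."""
    return sum(train_rows) / len(train_rows) * cfg.get("scale", 1.0)

def evaluate(model, test_rows):
    """Evaluate on the held-out split and return a metrics dict."""
    mae = sum(abs(model - r) for r in test_rows) / len(test_rows)
    return {"mae": mae}

def export(model, path):
    """Save the 'model' in a deployment format (plain JSON here)."""
    with open(path, "w") as f:
        json.dump({"model": model}, f)

# Stages compose into a full run, but each is callable on its own.
train_rows, test_rows = data_preprocessing([1.0, 2.0, None, 3.0, 4.0, 5.0])
model = train(train_rows, {"scale": 1.0})
metrics = evaluate(model, test_rows)
```

Because every stage takes explicit inputs and returns explicit outputs, a unit test can exercise evaluate() without ever running train().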
3. Artifact management:
- Every run saves to a versioned output directory: outputs/{run_id}/
- Artifacts: model checkpoint, config copy, metrics JSON, training plots
- Symlink outputs/latest → most recent run for convenience
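The versioned-directory and latest-symlink convention can be sketched as follows; the timestamp-based run_id and the artifact names are assumptions (checkpoints and plots would be written alongside the metrics):

```python
import datetime
import json
from pathlib import Path

def make_run_dir(root="outputs"):
    """Create outputs/{run_id}/ and repoint outputs/latest at it."""
    run_id = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    run_dir = Path(root) / run_id
    run_dir.mkdir(parents=True, exist_ok=True)
    latest = Path(root) / "latest"
    if latest.is_symlink() or latest.exists():
        latest.unlink()
    # Relative symlink; on Windows, creating symlinks may need extra privileges.
    latest.symlink_to(run_dir.name)
    return run_dir

run_dir = make_run_dir()
# Per-run artifacts: metrics JSON here; config copy, checkpoint, plots go next to it.
(run_dir / "metrics.json").write_text(json.dumps({"accuracy": 0.9}))
```

Using a relative symlink target (the run_id alone, not the absolute path) keeps outputs/latest valid if the outputs directory is moved or synced elsewhere.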
4. CLI interface:
- python train.py --config configs/base.yaml --overrides learning_rate=1e-4
- Subcommands: train, evaluate, export, full (all stages)
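If Hydra is not handling the CLI, argparse subcommands give the same shape. A sketch with the subcommands listed above (flag names mirror the example invocation, but the parser layout is an assumption):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="train.py")
    parser.add_argument("--config", default="configs/base.yaml",
                        help="path to the base config file")
    sub = parser.add_subparsers(dest="command", required=True)
    for name in ("train", "evaluate", "export", "full"):
        step = sub.add_parser(name)
        step.add_argument("--overrides", nargs="*", default=[],
                          help="dotted key=value pairs, e.g. learning_rate=1e-4")
    return parser

# Equivalent to: python train.py --config configs/base.yaml train --overrides learning_rate=1e-4
args = build_parser().parse_args(
    ["--config", "configs/base.yaml", "train", "--overrides", "learning_rate=1e-4"]
)
```

The dispatcher then maps args.command to the matching stage function, with "full" running all four in order.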
5. Dependency management:
- requirements.txt with pinned versions
- Optional: pyproject.toml with extras for training vs inference
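A hypothetical pyproject.toml fragment showing the extras split; every package name and version pin here is a placeholder, not a recommendation:

```toml
[project]
name = "my-training-pipeline"
version = "0.1.0"
dependencies = ["numpy==1.26.4", "omegaconf==2.3.0"]

[project.optional-dependencies]
# Heavy training-only deps stay out of the inference image.
train = ["torch==2.3.0", "matplotlib==3.8.4"]
inference = ["onnxruntime==1.18.0"]
```

Training environments then install with pip install -e ".[train]" while serving images install only ".[inference]".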
6. Entry point guard:
- Wrap the script entry point in an if __name__ == '__main__': guard; on Windows, DataLoader worker processes re-import the module, and without the guard each re-import would re-launch the training run
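The guard itself is a one-line pattern (the main() body is a placeholder for the real pipeline dispatch):

```python
def main():
    # Build the DataLoader (num_workers > 0) and dispatch pipeline stages here.
    # Worker processes re-import this module on Windows/spawn; because that
    # re-import never satisfies the guard below, they do not re-run training.
    return "trained"

if __name__ == "__main__":
    main()
```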
Return: refactored pipeline structure, Hydra config setup, and CLI interface.
When to use this prompt
when a training script has become too hardcoded or difficult to reproduce
when config files and command-line overrides are needed
when preprocessing, training, evaluation, and export should be separate stages
when each run should save versioned artifacts in a predictable structure
What the AI should return
A refactored ML pipeline structure with Hydra or OmegaConf config, CLI commands, stage functions, and artifact management conventions.
How to use this prompt
Open your data context
Load your dataset, notebook, or working environment so the AI can operate on the actual project context.
Copy the prompt text
Use the copy button above and paste the prompt into the AI assistant or prompt input area.
Review the output critically
Check whether the result matches your data, assumptions, and desired format before moving on.
Chain into the next prompt
Once you have the first result, continue deeper with related prompts in MLOps and CI/CD.
Frequently asked questions
What does the Training Pipeline as Code prompt do?
It gives you a structured MLOps and CI/CD starting point for ML engineer work and helps you move faster without starting from a blank page.
Who is this prompt for?
It is designed for ML engineer workflows and marked as beginner, so it works well as a guided starting point for that level of experience.
What type of prompt is this?
Training Pipeline as Code is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.
Can I use this outside MLJAR Studio?
Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.
What should I open next?
Natural next steps from here are Automated Retraining Trigger, CI/CD for ML Pipeline, and Data Versioning with DVC.