Knowledge Distillation AI Prompt
This prompt implements knowledge distillation so a smaller student model can learn from a larger teacher using soft targets and optional intermediate feature matching. It is useful when you want much of the teacher's accuracy in a cheaper model.
Implement knowledge distillation to train a smaller student model to match a larger teacher model.
Teacher model: {{teacher_model}} (large, high-accuracy, slow)
Student model: {{student_model}} (small, faster, to be trained)
1. Soft target distillation (Hinton et al. 2015):
- Get teacher soft probabilities: softmax(teacher_logits / temperature)
- Student loss = α × T² × KL(teacher_soft || student_soft) + (1-α) × CrossEntropy(student_logits, hard_labels); the T² factor keeps the soft-target gradients on the same scale as the hard-label loss (Hinton et al. 2015)
- Temperature T: higher T produces softer distributions (try T=3, T=5, T=10)
- α: weight between distillation loss and task loss (try α=0.7)
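A minimal loss sketch for this step, assuming a PyTorch setup; the function name, argument names, and the T=4.0, alpha=0.7 defaults are illustrative choices, not values fixed by the prompt:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soften both distributions with the same temperature T.
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target;
    # the T^2 factor keeps soft-target gradients comparable to the hard-label term.
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```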
2. Intermediate layer distillation (often helpful for deeper networks):
- Match intermediate feature maps between teacher and student layers
- Use an adapter layer if teacher and student have different hidden dimensions
- Feature distillation loss: MSE(student_features, teacher_features), as in the adapter sketch below
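A minimal adapter sketch under the same PyTorch assumptions; the layer choice and the dimensions (256-dim student, 1024-dim teacher) are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAdapter(nn.Module):
    """Projects student features up to the teacher's hidden size so MSE is well-defined."""
    def __init__(self, student_dim, teacher_dim):
        super().__init__()
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, student_feats):
        return self.proj(student_feats)

adapter = FeatureAdapter(student_dim=256, teacher_dim=1024)
student_feats = torch.randn(8, 256)    # hidden states from a chosen student layer
teacher_feats = torch.randn(8, 1024)   # output of the matching teacher layer
# Detach the teacher side: gradients flow only through the student and adapter.
feature_loss = F.mse_loss(adapter(student_feats), teacher_feats.detach())
```

The adapter trains jointly with the student and is discarded after distillation.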
3. Training procedure:
- Freeze teacher model (no gradients)
- Train student with combined loss
- Use a slightly higher learning rate than you would for training from scratch
- Train for the same number of epochs as the from-scratch baseline (see the loop sketch below)
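One way the loop can look, reusing distillation_loss from the sketch above; teacher, student, train_loader, and num_epochs are placeholders for your own objects, and the AdamW learning rate is an arbitrary starting point:

```python
import torch

teacher.eval()                          # freeze the teacher: eval mode, no gradients
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(student.parameters(), lr=3e-4)

for epoch in range(num_epochs):
    student.train()
    for inputs, labels in train_loader:
        with torch.no_grad():           # teacher forward pass only, no graph
            teacher_logits = teacher(inputs)
        student_logits = student(inputs)
        loss = distillation_loss(student_logits, teacher_logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```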
4. Evaluation:
- Student accuracy vs teacher accuracy
- Student accuracy vs same architecture trained from scratch (distillation should outperform)
- Student inference latency vs teacher inference latency
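A rough CPU timing sketch for the latency comparison; the input shape and iteration counts are arbitrary, and on GPU you would add torch.cuda.synchronize() around the timed region:

```python
import time
import torch

@torch.no_grad()
def mean_latency_ms(model, example_input, iters=100, warmup=10):
    model.eval()
    for _ in range(warmup):             # warm-up runs to absorb one-off startup costs
        model(example_input)
    start = time.perf_counter()
    for _ in range(iters):
        model(example_input)
    return (time.perf_counter() - start) / iters * 1000

example = torch.randn(1, 3, 224, 224)   # placeholder input shape
print(f"teacher: {mean_latency_ms(teacher, example):.2f} ms")
print(f"student: {mean_latency_ms(student, example):.2f} ms")
```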
5. Self-distillation variant:
- If no pre-trained teacher exists, use frozen snapshots of the model from earlier epochs as the teacher
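One simple way to realize this, building on the loop above; the snapshot cadence is a tuning knob, and train_one_epoch is a hypothetical helper wrapping that loop:

```python
import copy

def refresh_teacher(student):
    """Return a frozen snapshot of the current student to act as the teacher."""
    teacher = copy.deepcopy(student).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

snapshot_every = 5                      # refresh cadence is illustrative, not prescribed
teacher = None
for epoch in range(num_epochs):
    # Until the first snapshot exists, train on hard labels only (alpha = 0);
    # afterwards, use the combined distillation loss against the snapshot.
    train_one_epoch(student, teacher)   # hypothetical helper wrapping the loop above
    if (epoch + 1) % snapshot_every == 0:
        teacher = refresh_teacher(student)
```

An exponential moving average of the student's weights is a common alternative to hard snapshots.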
Return: distillation training loop, temperature sweep results, student vs teacher benchmark, and comparison to training from scratch.
When to use this prompt
when compressing a strong but slow teacher model into a faster student
when training a student from scratch underperforms
when you want to test temperature and alpha settings systematically
when intermediate feature distillation may improve student quality
What the AI should return
A distillation training loop, temperature sweep guidance, and a comparison of teacher, distilled student, and student-from-scratch performance.
How to use this prompt
Open your data context
Load your dataset, notebook, or working environment so the AI can operate on the actual project context.
Copy the prompt text
Use the copy button above and paste the prompt into the AI assistant or prompt input area.
Review the output critically
Check whether the result matches your data, assumptions, and desired format before moving on.
Chain into the next prompt
Once you have the first result, continue deeper with related prompts in Model Compression.
Frequently asked questions
What does the Knowledge Distillation prompt do?
It gives you a structured model compression starting point for ML engineer work and helps you move faster without starting from a blank page.
Who is this prompt for?
It is designed for ML engineer workflows and marked as intermediate, so it works well as a guided starting point for that level of experience.
What type of prompt is this?
Knowledge Distillation is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.
Can I use this outside MLJAR Studio?
Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.
What should I open next?
Natural next steps from here are Compression Pipeline Chain, ONNX Export and Validation, and Post-Training Quantization.